Test Report: Docker_Linux_docker_arm64 17340

49babfe4fcdff3bcc398a25366bae00d3ae6dc66:2023-10-02:31256

Failed tests (3/320)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 25    | TestAddons/parallel/Ingress                         | 37.38        |
| 163   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 51.95        |
| 219   | TestMultiNode/serial/RestartKeepsNodes              | 272.03       |
TestAddons/parallel/Ingress (37.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-358443 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-358443 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-358443 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9a0c5861-fb35-409d-9a93-00d1d1a3392b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9a0c5861-fb35-409d-9a93-00d1d1a3392b] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.02495791s
addons_test.go:240: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context addons-358443 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:275: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.063871676s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:277: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:281: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:284: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-arm64 -p addons-358443 addons disable ingress-dns --alsologtostderr -v=1: (1.586444483s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-arm64 -p addons-358443 addons disable ingress --alsologtostderr -v=1: (7.707237226s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-358443
helpers_test.go:235: (dbg) docker inspect addons-358443:

-- stdout --
	[
	    {
	        "Id": "865bad2c205bdaf5a2e12743a477e568ab1e340978de8c70316aef8d039f659f",
	        "Created": "2023-10-02T10:36:31.978554776Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2140645,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T10:36:32.302663118Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/865bad2c205bdaf5a2e12743a477e568ab1e340978de8c70316aef8d039f659f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/865bad2c205bdaf5a2e12743a477e568ab1e340978de8c70316aef8d039f659f/hostname",
	        "HostsPath": "/var/lib/docker/containers/865bad2c205bdaf5a2e12743a477e568ab1e340978de8c70316aef8d039f659f/hosts",
	        "LogPath": "/var/lib/docker/containers/865bad2c205bdaf5a2e12743a477e568ab1e340978de8c70316aef8d039f659f/865bad2c205bdaf5a2e12743a477e568ab1e340978de8c70316aef8d039f659f-json.log",
	        "Name": "/addons-358443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-358443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-358443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c23bfdd65dd805689c1f068c6dc9985db9e65aa03eab72f614680ddc41b4f5b9-init/diff:/var/lib/docker/overlay2/1d88af17a205d2819b1e281e265595a32e0f15f4f368d2227a6ad399b77d9a22/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c23bfdd65dd805689c1f068c6dc9985db9e65aa03eab72f614680ddc41b4f5b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c23bfdd65dd805689c1f068c6dc9985db9e65aa03eab72f614680ddc41b4f5b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c23bfdd65dd805689c1f068c6dc9985db9e65aa03eab72f614680ddc41b4f5b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-358443",
	                "Source": "/var/lib/docker/volumes/addons-358443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-358443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-358443",
	                "name.minikube.sigs.k8s.io": "addons-358443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd59a3e6ae0a3b2d09ca6cacf0781fe308e5794f9b0e5e06b86f752abfd04b79",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35490"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35486"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35488"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35487"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fd59a3e6ae0a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-358443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "865bad2c205b",
	                        "addons-358443"
	                    ],
	                    "NetworkID": "7b4858a608a03ba26f6bf8722d70b336053323f97add14d119b8253f9d53f201",
	                    "EndpointID": "5b6f3018efbec03b029dbe38453fd68f623818c2c6396227f34ff5f9989192ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-358443 -n addons-358443
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-358443 logs -n 25: (1.296034483s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-211888   | jenkins | v1.31.2 | 02 Oct 23 10:35 UTC |                     |
	|         | -p download-only-211888              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-211888   | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC |                     |
	|         | -p download-only-211888              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| delete  | -p download-only-211888              | download-only-211888   | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| delete  | -p download-only-211888              | download-only-211888   | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| start   | --download-only -p                   | download-docker-213108 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC |                     |
	|         | download-docker-213108               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-213108            | download-docker-213108 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| start   | --download-only -p                   | binary-mirror-087071   | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC |                     |
	|         | binary-mirror-087071                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36211               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-087071              | binary-mirror-087071   | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| start   | -p addons-358443 --wait=true         | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:38 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-358443 ip                     | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:38 UTC | 02 Oct 23 10:38 UTC |
	| addons  | addons-358443 addons disable         | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:38 UTC | 02 Oct 23 10:38 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-358443 addons                 | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:38 UTC | 02 Oct 23 10:38 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | addons-358443                        |                        |         |         |                     |                     |
	| ssh     | addons-358443 ssh curl -s            | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-358443 ip                     | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	| addons  | addons-358443 addons                 | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-358443 addons disable         | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-358443 addons disable         | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-358443 addons                 | addons-358443          | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:36:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:36:09.372930 2140188 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:36:09.373133 2140188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:36:09.373144 2140188 out.go:309] Setting ErrFile to fd 2...
	I1002 10:36:09.373151 2140188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:36:09.373452 2140188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	I1002 10:36:09.373957 2140188 out.go:303] Setting JSON to false
	I1002 10:36:09.374781 2140188 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65917,"bootTime":1696177053,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 10:36:09.374861 2140188 start.go:138] virtualization:  
	I1002 10:36:09.377791 2140188 out.go:177] * [addons-358443] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 10:36:09.380179 2140188 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:36:09.380361 2140188 notify.go:220] Checking for updates...
	I1002 10:36:09.382454 2140188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:36:09.384568 2140188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:36:09.386545 2140188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	I1002 10:36:09.388502 2140188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 10:36:09.390438 2140188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:36:09.392576 2140188 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:36:09.420709 2140188 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 10:36:09.420805 2140188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:36:09.508436 2140188 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-02 10:36:09.498701165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:36:09.508540 2140188 docker.go:294] overlay module found
	I1002 10:36:09.511773 2140188 out.go:177] * Using the docker driver based on user configuration
	I1002 10:36:09.513660 2140188 start.go:298] selected driver: docker
	I1002 10:36:09.513684 2140188 start.go:902] validating driver "docker" against <nil>
	I1002 10:36:09.513709 2140188 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:36:09.514356 2140188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:36:09.583595 2140188 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-02 10:36:09.574376898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:36:09.583771 2140188 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 10:36:09.584009 2140188 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 10:36:09.585970 2140188 out.go:177] * Using Docker driver with root privileges
	I1002 10:36:09.587726 2140188 cni.go:84] Creating CNI manager for ""
	I1002 10:36:09.587747 2140188 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 10:36:09.587758 2140188 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 10:36:09.587771 2140188 start_flags.go:321] config:
	{Name:addons-358443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-358443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:36:09.590022 2140188 out.go:177] * Starting control plane node addons-358443 in cluster addons-358443
	I1002 10:36:09.591822 2140188 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:36:09.593843 2140188 out.go:177] * Pulling base image ...
	I1002 10:36:09.595747 2140188 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:36:09.595794 2140188 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 10:36:09.595809 2140188 cache.go:57] Caching tarball of preloaded images
	I1002 10:36:09.595876 2140188 preload.go:174] Found /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 10:36:09.595890 2140188 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 10:36:09.596280 2140188 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/config.json ...
	I1002 10:36:09.596309 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/config.json: {Name:mkabb3b49f052201df872df574a61d4748a50945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:09.596462 2140188 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:36:09.613626 2140188 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 10:36:09.613765 2140188 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 10:36:09.613788 2140188 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I1002 10:36:09.613796 2140188 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I1002 10:36:09.613805 2140188 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I1002 10:36:09.613815 2140188 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from local cache
	I1002 10:36:25.269680 2140188 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from cached tarball
	I1002 10:36:25.269717 2140188 cache.go:195] Successfully downloaded all kic artifacts
	I1002 10:36:25.269786 2140188 start.go:365] acquiring machines lock for addons-358443: {Name:mk673a3044e1f010c27598fe87a74bb939acb9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:36:25.269908 2140188 start.go:369] acquired machines lock for "addons-358443" in 99.947µs
	I1002 10:36:25.269939 2140188 start.go:93] Provisioning new machine with config: &{Name:addons-358443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-358443 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 10:36:25.270017 2140188 start.go:125] createHost starting for "" (driver="docker")
	I1002 10:36:25.272218 2140188 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1002 10:36:25.272444 2140188 start.go:159] libmachine.API.Create for "addons-358443" (driver="docker")
	I1002 10:36:25.272471 2140188 client.go:168] LocalClient.Create starting
	I1002 10:36:25.272576 2140188 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem
	I1002 10:36:25.513980 2140188 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem
	I1002 10:36:25.731285 2140188 cli_runner.go:164] Run: docker network inspect addons-358443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 10:36:25.749525 2140188 cli_runner.go:211] docker network inspect addons-358443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 10:36:25.749619 2140188 network_create.go:281] running [docker network inspect addons-358443] to gather additional debugging logs...
	I1002 10:36:25.749640 2140188 cli_runner.go:164] Run: docker network inspect addons-358443
	W1002 10:36:25.768145 2140188 cli_runner.go:211] docker network inspect addons-358443 returned with exit code 1
	I1002 10:36:25.768173 2140188 network_create.go:284] error running [docker network inspect addons-358443]: docker network inspect addons-358443: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-358443 not found
	I1002 10:36:25.768190 2140188 network_create.go:286] output of [docker network inspect addons-358443]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-358443 not found
	
	** /stderr **
	I1002 10:36:25.768262 2140188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:36:25.786051 2140188 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001149b30}
	I1002 10:36:25.786095 2140188 network_create.go:123] attempt to create docker network addons-358443 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 10:36:25.786154 2140188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-358443 addons-358443
	I1002 10:36:25.860744 2140188 network_create.go:107] docker network addons-358443 192.168.49.0/24 created
	I1002 10:36:25.860777 2140188 kic.go:117] calculated static IP "192.168.49.2" for the "addons-358443" container
	I1002 10:36:25.860849 2140188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 10:36:25.877576 2140188 cli_runner.go:164] Run: docker volume create addons-358443 --label name.minikube.sigs.k8s.io=addons-358443 --label created_by.minikube.sigs.k8s.io=true
	I1002 10:36:25.896531 2140188 oci.go:103] Successfully created a docker volume addons-358443
	I1002 10:36:25.896640 2140188 cli_runner.go:164] Run: docker run --rm --name addons-358443-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-358443 --entrypoint /usr/bin/test -v addons-358443:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 10:36:28.005998 2140188 cli_runner.go:217] Completed: docker run --rm --name addons-358443-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-358443 --entrypoint /usr/bin/test -v addons-358443:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (2.109297896s)
	I1002 10:36:28.006043 2140188 oci.go:107] Successfully prepared a docker volume addons-358443
	I1002 10:36:28.006066 2140188 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:36:28.006091 2140188 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 10:36:28.006185 2140188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-358443:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 10:36:31.887262 2140188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-358443:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (3.88102981s)
	I1002 10:36:31.887296 2140188 kic.go:199] duration metric: took 3.881201 seconds to extract preloaded images to volume
	W1002 10:36:31.887444 2140188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 10:36:31.887563 2140188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 10:36:31.962970 2140188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-358443 --name addons-358443 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-358443 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-358443 --network addons-358443 --ip 192.168.49.2 --volume addons-358443:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 10:36:32.311748 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Running}}
	I1002 10:36:32.333607 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:36:32.358606 2140188 cli_runner.go:164] Run: docker exec addons-358443 stat /var/lib/dpkg/alternatives/iptables
	I1002 10:36:32.424285 2140188 oci.go:144] the created container "addons-358443" has a running status.
	I1002 10:36:32.424316 2140188 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa...
	I1002 10:36:32.678525 2140188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 10:36:32.702919 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:36:32.741659 2140188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 10:36:32.741677 2140188 kic_runner.go:114] Args: [docker exec --privileged addons-358443 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 10:36:32.843847 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:36:32.866431 2140188 machine.go:88] provisioning docker machine ...
	I1002 10:36:32.866459 2140188 ubuntu.go:169] provisioning hostname "addons-358443"
	I1002 10:36:32.866523 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:32.898090 2140188 main.go:141] libmachine: Using SSH client type: native
	I1002 10:36:32.898522 2140188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35490 <nil> <nil>}
	I1002 10:36:32.898542 2140188 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-358443 && echo "addons-358443" | sudo tee /etc/hostname
	I1002 10:36:32.899115 2140188 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 10:36:36.058012 2140188 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-358443
	
	I1002 10:36:36.058106 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:36.076523 2140188 main.go:141] libmachine: Using SSH client type: native
	I1002 10:36:36.076945 2140188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35490 <nil> <nil>}
	I1002 10:36:36.076968 2140188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-358443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-358443/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-358443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:36:36.214481 2140188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:36:36.214513 2140188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2134307/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2134307/.minikube}
	I1002 10:36:36.214533 2140188 ubuntu.go:177] setting up certificates
	I1002 10:36:36.214542 2140188 provision.go:83] configureAuth start
	I1002 10:36:36.214614 2140188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-358443
	I1002 10:36:36.232235 2140188 provision.go:138] copyHostCerts
	I1002 10:36:36.232310 2140188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem (1082 bytes)
	I1002 10:36:36.232433 2140188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem (1123 bytes)
	I1002 10:36:36.232488 2140188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem (1679 bytes)
	I1002 10:36:36.232541 2140188 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem org=jenkins.addons-358443 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-358443]
	I1002 10:36:36.620384 2140188 provision.go:172] copyRemoteCerts
	I1002 10:36:36.620459 2140188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:36:36.620504 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:36.638531 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:36:36.736081 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:36:36.764075 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1002 10:36:36.791942 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1002 10:36:36.819503 2140188 provision.go:86] duration metric: configureAuth took 604.921849ms
	I1002 10:36:36.819531 2140188 ubuntu.go:193] setting minikube options for container-runtime
	I1002 10:36:36.819722 2140188 config.go:182] Loaded profile config "addons-358443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:36:36.819781 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:36.837465 2140188 main.go:141] libmachine: Using SSH client type: native
	I1002 10:36:36.837887 2140188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35490 <nil> <nil>}
	I1002 10:36:36.837898 2140188 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 10:36:36.979097 2140188 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 10:36:36.979122 2140188 ubuntu.go:71] root file system type: overlay
	I1002 10:36:36.979243 2140188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 10:36:36.979312 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:37.003620 2140188 main.go:141] libmachine: Using SSH client type: native
	I1002 10:36:37.004059 2140188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35490 <nil> <nil>}
	I1002 10:36:37.004150 2140188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 10:36:37.157012 2140188 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 10:36:37.157108 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:37.175236 2140188 main.go:141] libmachine: Using SSH client type: native
	I1002 10:36:37.175654 2140188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35490 <nil> <nil>}
	I1002 10:36:37.175680 2140188 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 10:36:38.025926 2140188 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:29:57.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-02 10:36:37.152387270 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Comment out TasksMax if your systemd version does not support it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
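As a self-contained illustration of the drop-in pattern described in the unit file above, the sketch below writes a throwaway file (not the real unit) showing how an empty `ExecStart=` line clears any inherited command before the real one is set, avoiding systemd's "more than one ExecStart= setting" error:

```shell
# Hypothetical drop-in written to a temp file; paths are stand-ins only.
dropin=$(mktemp)
cat > "$dropin" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# The first ExecStart= must be empty; the second carries the actual command.
head -n 2 "$dropin" | tail -n 1
```

Only `Type=oneshot` services may list multiple non-empty `ExecStart=` commands; for `Type=notify` units like dockerd's, the reset line is mandatory when overriding.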
	
	I1002 10:36:38.025962 2140188 machine.go:91] provisioned docker machine in 5.159512525s
	I1002 10:36:38.025973 2140188 client.go:171] LocalClient.Create took 12.753494253s
	I1002 10:36:38.025998 2140188 start.go:167] duration metric: libmachine.API.Create for "addons-358443" took 12.753554429s
	I1002 10:36:38.026008 2140188 start.go:300] post-start starting for "addons-358443" (driver="docker")
	I1002 10:36:38.026018 2140188 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:36:38.026103 2140188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:36:38.026175 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:38.045155 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:36:38.144334 2140188 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:36:38.148648 2140188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 10:36:38.148682 2140188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 10:36:38.148694 2140188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 10:36:38.148701 2140188 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 10:36:38.148720 2140188 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/addons for local assets ...
	I1002 10:36:38.148788 2140188 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/files for local assets ...
	I1002 10:36:38.148809 2140188 start.go:303] post-start completed in 122.795318ms
	I1002 10:36:38.149112 2140188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-358443
	I1002 10:36:38.166361 2140188 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/config.json ...
	I1002 10:36:38.166648 2140188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:36:38.166697 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:38.190004 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:36:38.283333 2140188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 10:36:38.289059 2140188 start.go:128] duration metric: createHost completed in 13.019023234s
	I1002 10:36:38.289084 2140188 start.go:83] releasing machines lock for "addons-358443", held for 13.019162138s
	I1002 10:36:38.289173 2140188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-358443
	I1002 10:36:38.307117 2140188 ssh_runner.go:195] Run: cat /version.json
	I1002 10:36:38.307151 2140188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:36:38.307169 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:38.307207 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:36:38.328968 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:36:38.340692 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:36:38.421805 2140188 ssh_runner.go:195] Run: systemctl --version
	I1002 10:36:38.560274 2140188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 10:36:38.566046 2140188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 10:36:38.596114 2140188 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 10:36:38.596197 2140188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:36:38.630258 2140188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1002 10:36:38.630282 2140188 start.go:469] detecting cgroup driver to use...
	I1002 10:36:38.630316 2140188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:36:38.630424 2140188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:36:38.650013 2140188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 10:36:38.662119 2140188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 10:36:38.673772 2140188 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 10:36:38.673845 2140188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 10:36:38.685375 2140188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:36:38.697113 2140188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 10:36:38.708579 2140188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:36:38.720011 2140188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:36:38.731097 2140188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
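The `SystemdCgroup` rewrite in the sed commands above can be exercised against a throwaway config rather than the real /etc/containerd/config.toml; this sketch (all paths hypothetical) shows the indentation-preserving capture group doing the substitution:

```shell
# Scratch config.toml stand-in with the indented key containerd uses.
cfg=$(mktemp)
printf '            SystemdCgroup = true\n' > "$cfg"
# Same sed as the log: capture leading spaces, force the value to false.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```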
	I1002 10:36:38.742748 2140188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:36:38.752856 2140188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:36:38.762800 2140188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:36:38.861078 2140188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 10:36:38.970785 2140188 start.go:469] detecting cgroup driver to use...
	I1002 10:36:38.970830 2140188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:36:38.970889 2140188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 10:36:38.993005 2140188 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 10:36:38.993091 2140188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 10:36:39.009996 2140188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:36:39.034277 2140188 ssh_runner.go:195] Run: which cri-dockerd
	I1002 10:36:39.040317 2140188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 10:36:39.060640 2140188 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 10:36:39.089196 2140188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 10:36:39.199737 2140188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 10:36:39.310261 2140188 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 10:36:39.310376 2140188 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 10:36:39.332928 2140188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:36:39.437933 2140188 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:36:39.723526 2140188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:36:39.818844 2140188 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 10:36:39.914123 2140188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:36:40.015625 2140188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:36:40.131924 2140188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 10:36:40.152349 2140188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:36:40.256296 2140188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 10:36:40.338437 2140188 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 10:36:40.338601 2140188 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 10:36:40.343839 2140188 start.go:537] Will wait 60s for crictl version
	I1002 10:36:40.343950 2140188 ssh_runner.go:195] Run: which crictl
	I1002 10:36:40.348764 2140188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:36:40.406438 2140188 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 10:36:40.406507 2140188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:36:40.433714 2140188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:36:40.463735 2140188 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 10:36:40.463866 2140188 cli_runner.go:164] Run: docker network inspect addons-358443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:36:40.480998 2140188 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 10:36:40.485717 2140188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
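The /etc/hosts rewrite above follows an idempotent pattern: strip any existing line for the name, then append a fresh entry, so repeated runs never duplicate it. A sketch against a scratch file (not the real /etc/hosts):

```shell
# Scratch hosts file seeded with an existing entry for the name.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
# Remove any prior line for the host, then append the canonical one.
{ grep -v "${tab}host.minikube.internal$" "$hosts"; printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
grep -c 'host.minikube.internal' "$hosts.new"
```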
	I1002 10:36:40.499493 2140188 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:36:40.499561 2140188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 10:36:40.521408 2140188 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 10:36:40.521434 2140188 docker.go:594] Images already preloaded, skipping extraction
	I1002 10:36:40.521507 2140188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 10:36:40.542904 2140188 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 10:36:40.542932 2140188 cache_images.go:84] Images are preloaded, skipping loading
	I1002 10:36:40.543014 2140188 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 10:36:40.610176 2140188 cni.go:84] Creating CNI manager for ""
	I1002 10:36:40.610202 2140188 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 10:36:40.610233 2140188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:36:40.610254 2140188 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-358443 NodeName:addons-358443 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 10:36:40.610392 2140188 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-358443"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:36:40.610465 2140188 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-358443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-358443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:36:40.610545 2140188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 10:36:40.621673 2140188 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:36:40.621761 2140188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 10:36:40.633414 2140188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1002 10:36:40.655200 2140188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 10:36:40.676124 2140188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1002 10:36:40.697035 2140188 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 10:36:40.701568 2140188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:36:40.715017 2140188 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443 for IP: 192.168.49.2
	I1002 10:36:40.715049 2140188 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1d43a94e604cdd7d897bd7b1078cd14b38f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:40.715801 2140188 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key
	I1002 10:36:41.075100 2140188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt ...
	I1002 10:36:41.075129 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt: {Name:mke1b0610c862d3f88ce79f72308f46474e057b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:41.075318 2140188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key ...
	I1002 10:36:41.075332 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key: {Name:mkb893be88f2c2b11bab44e1db972919b81524b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:41.076005 2140188 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key
	I1002 10:36:41.543239 2140188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt ...
	I1002 10:36:41.543268 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt: {Name:mkba414506e1db5d5a545e799d21f0eaf386d618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:41.543454 2140188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key ...
	I1002 10:36:41.543466 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key: {Name:mk65861e1a628794015e3066c5b0736b1dd42b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:41.543585 2140188 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.key
	I1002 10:36:41.543602 2140188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt with IP's: []
	I1002 10:36:41.664997 2140188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt ...
	I1002 10:36:41.665023 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: {Name:mkcb4e6e2f1175895e5a51168be631f97bdec0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:41.665199 2140188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.key ...
	I1002 10:36:41.665214 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.key: {Name:mkfe8871ea947c1d02fdb0f7cbf6cd2e31114a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:41.665320 2140188 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.key.dd3b5fb2
	I1002 10:36:41.665340 2140188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 10:36:42.976588 2140188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.crt.dd3b5fb2 ...
	I1002 10:36:42.976619 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.crt.dd3b5fb2: {Name:mkeb3250b7d42aa14ff678e6dc03dfe4ae16a08c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:42.977337 2140188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.key.dd3b5fb2 ...
	I1002 10:36:42.977354 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.key.dd3b5fb2: {Name:mkb1c4ac20cc6ec522e8bd50d40198489e0ef57d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:42.977444 2140188 certs.go:337] copying /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.crt
	I1002 10:36:42.977516 2140188 certs.go:341] copying /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.key
	I1002 10:36:42.977565 2140188 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/proxy-client.key
	I1002 10:36:42.977585 2140188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/proxy-client.crt with IP's: []
	I1002 10:36:43.619344 2140188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/proxy-client.crt ...
	I1002 10:36:43.619378 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/proxy-client.crt: {Name:mkbe1f44dc2a111418aba897145d0a05a1f331ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:43.619576 2140188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/proxy-client.key ...
	I1002 10:36:43.619590 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/proxy-client.key: {Name:mk4c58982758e49edff38d600265b41c261c7bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:43.619806 2140188 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 10:36:43.619851 2140188 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:36:43.619894 2140188 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:36:43.619923 2140188 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem (1679 bytes)
	I1002 10:36:43.620599 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 10:36:43.650526 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 10:36:43.679185 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 10:36:43.708545 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 10:36:43.735664 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:36:43.763109 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 10:36:43.790527 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:36:43.817820 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 10:36:43.845335 2140188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:36:43.872601 2140188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 10:36:43.893064 2140188 ssh_runner.go:195] Run: openssl version
	I1002 10:36:43.900019 2140188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:36:43.911606 2140188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:36:43.916096 2140188 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:36:43.916194 2140188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:36:43.924415 2140188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
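The trust-store step above uses the `test -L || ln -fs` guard so the hash-named symlink is only created when missing and re-runs stay safe. A sketch using a throwaway directory in place of /etc/ssl/certs ("b5213941.0" is the subject-hash name from the log):

```shell
# Stand-in trust-store directory and CA file; paths are hypothetical.
dir=$(mktemp -d)
touch "$dir/minikubeCA.pem"
# Create the hash-named link only if absent; -fs overwrites a stale link.
test -L "$dir/b5213941.0" || ln -fs "$dir/minikubeCA.pem" "$dir/b5213941.0"
readlink "$dir/b5213941.0"
```

OpenSSL resolves CAs by this `<subject-hash>.0` naming (the hash comes from `openssl x509 -hash -noout`), which is why the plain .pem alone is not enough.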
	I1002 10:36:43.936189 2140188 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:36:43.940349 2140188 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:36:43.940394 2140188 kubeadm.go:404] StartCluster: {Name:addons-358443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-358443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:36:43.940532 2140188 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 10:36:43.960130 2140188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 10:36:43.971074 2140188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 10:36:43.982009 2140188 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 10:36:43.982116 2140188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 10:36:43.992720 2140188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 10:36:43.992764 2140188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 10:36:44.112858 2140188 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:36:44.192790 2140188 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 10:37:01.186505 2140188 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 10:37:01.186562 2140188 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 10:37:01.186644 2140188 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:37:01.186698 2140188 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:37:01.186732 2140188 kubeadm.go:322] OS: Linux
	I1002 10:37:01.186774 2140188 kubeadm.go:322] CGROUPS_CPU: enabled
	I1002 10:37:01.186820 2140188 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1002 10:37:01.186869 2140188 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1002 10:37:01.186915 2140188 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1002 10:37:01.186960 2140188 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1002 10:37:01.187005 2140188 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1002 10:37:01.187048 2140188 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1002 10:37:01.187093 2140188 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1002 10:37:01.187136 2140188 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1002 10:37:01.187203 2140188 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 10:37:01.187292 2140188 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 10:37:01.187377 2140188 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 10:37:01.187435 2140188 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 10:37:01.189752 2140188 out.go:204]   - Generating certificates and keys ...
	I1002 10:37:01.189860 2140188 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 10:37:01.189922 2140188 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 10:37:01.189985 2140188 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 10:37:01.190038 2140188 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 10:37:01.190101 2140188 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 10:37:01.190149 2140188 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 10:37:01.190199 2140188 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 10:37:01.190309 2140188 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-358443 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 10:37:01.190358 2140188 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 10:37:01.190465 2140188 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-358443 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 10:37:01.190526 2140188 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 10:37:01.190586 2140188 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 10:37:01.190631 2140188 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 10:37:01.190686 2140188 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 10:37:01.190734 2140188 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 10:37:01.190784 2140188 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 10:37:01.190844 2140188 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 10:37:01.190896 2140188 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 10:37:01.190971 2140188 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 10:37:01.191033 2140188 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 10:37:01.192819 2140188 out.go:204]   - Booting up control plane ...
	I1002 10:37:01.193000 2140188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 10:37:01.193115 2140188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 10:37:01.193187 2140188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 10:37:01.193306 2140188 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 10:37:01.193392 2140188 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 10:37:01.193432 2140188 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 10:37:01.193588 2140188 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 10:37:01.193666 2140188 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002497 seconds
	I1002 10:37:01.193777 2140188 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 10:37:01.193911 2140188 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 10:37:01.193971 2140188 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 10:37:01.194155 2140188 kubeadm.go:322] [mark-control-plane] Marking the node addons-358443 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 10:37:01.194212 2140188 kubeadm.go:322] [bootstrap-token] Using token: vukmji.s9d8umma45j4yrc3
	I1002 10:37:01.196329 2140188 out.go:204]   - Configuring RBAC rules ...
	I1002 10:37:01.196442 2140188 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 10:37:01.196521 2140188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 10:37:01.196712 2140188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 10:37:01.196830 2140188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 10:37:01.197008 2140188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 10:37:01.197132 2140188 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 10:37:01.197340 2140188 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 10:37:01.197397 2140188 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 10:37:01.197462 2140188 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 10:37:01.197472 2140188 kubeadm.go:322] 
	I1002 10:37:01.197530 2140188 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 10:37:01.197535 2140188 kubeadm.go:322] 
	I1002 10:37:01.197607 2140188 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 10:37:01.197612 2140188 kubeadm.go:322] 
	I1002 10:37:01.197636 2140188 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 10:37:01.197691 2140188 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 10:37:01.197739 2140188 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 10:37:01.197744 2140188 kubeadm.go:322] 
	I1002 10:37:01.197794 2140188 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 10:37:01.197799 2140188 kubeadm.go:322] 
	I1002 10:37:01.197853 2140188 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 10:37:01.197858 2140188 kubeadm.go:322] 
	I1002 10:37:01.197907 2140188 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 10:37:01.197977 2140188 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 10:37:01.198041 2140188 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 10:37:01.198045 2140188 kubeadm.go:322] 
	I1002 10:37:01.198125 2140188 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 10:37:01.198196 2140188 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 10:37:01.198202 2140188 kubeadm.go:322] 
	I1002 10:37:01.198281 2140188 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vukmji.s9d8umma45j4yrc3 \
	I1002 10:37:01.198383 2140188 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d \
	I1002 10:37:01.198404 2140188 kubeadm.go:322] 	--control-plane 
	I1002 10:37:01.198409 2140188 kubeadm.go:322] 
	I1002 10:37:01.198488 2140188 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 10:37:01.198496 2140188 kubeadm.go:322] 
	I1002 10:37:01.198573 2140188 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vukmji.s9d8umma45j4yrc3 \
	I1002 10:37:01.198681 2140188 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d 
	I1002 10:37:01.198690 2140188 cni.go:84] Creating CNI manager for ""
	I1002 10:37:01.198704 2140188 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 10:37:01.200629 2140188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 10:37:01.202596 2140188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 10:37:01.217144 2140188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 10:37:01.266583 2140188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 10:37:01.266728 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:01.266728 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=addons-358443 minikube.k8s.io/updated_at=2023_10_02T10_37_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:01.293536 2140188 ops.go:34] apiserver oom_adj: -16
	I1002 10:37:01.485689 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:01.719977 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:02.318135 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:02.817791 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:03.318497 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:03.817860 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:04.317782 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:04.818316 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:05.318503 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:05.818589 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:06.317794 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:06.817897 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:07.317799 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:07.818164 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:08.318340 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:08.817961 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:09.318388 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:09.818200 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:10.317816 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:10.818499 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:11.317990 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:11.818379 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:12.318747 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:12.818018 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:13.318043 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:13.818456 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:14.317932 2140188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:14.473472 2140188 kubeadm.go:1081] duration metric: took 13.206871546s to wait for elevateKubeSystemPrivileges.
	I1002 10:37:14.473500 2140188 kubeadm.go:406] StartCluster complete in 30.533108794s
	I1002 10:37:14.473516 2140188 settings.go:142] acquiring lock: {Name:mk7b49767935c15b5f90083e95558323a1cf0ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:14.473643 2140188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:37:14.474028 2140188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/kubeconfig: {Name:mk62f5c672074becc8cade8f73c1bedcd1d2907c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:14.476177 2140188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 10:37:14.476435 2140188 config.go:182] Loaded profile config "addons-358443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:37:14.476466 2140188 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1002 10:37:14.476530 2140188 addons.go:69] Setting volumesnapshots=true in profile "addons-358443"
	I1002 10:37:14.476543 2140188 addons.go:231] Setting addon volumesnapshots=true in "addons-358443"
	I1002 10:37:14.476575 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.476993 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.477708 2140188 addons.go:69] Setting ingress=true in profile "addons-358443"
	I1002 10:37:14.477727 2140188 addons.go:231] Setting addon ingress=true in "addons-358443"
	I1002 10:37:14.477769 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.478175 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.479091 2140188 addons.go:69] Setting ingress-dns=true in profile "addons-358443"
	I1002 10:37:14.479114 2140188 addons.go:231] Setting addon ingress-dns=true in "addons-358443"
	I1002 10:37:14.479177 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.479588 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.479882 2140188 addons.go:69] Setting cloud-spanner=true in profile "addons-358443"
	I1002 10:37:14.479898 2140188 addons.go:231] Setting addon cloud-spanner=true in "addons-358443"
	I1002 10:37:14.479937 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.480324 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.480623 2140188 addons.go:69] Setting inspektor-gadget=true in profile "addons-358443"
	I1002 10:37:14.480643 2140188 addons.go:231] Setting addon inspektor-gadget=true in "addons-358443"
	I1002 10:37:14.480672 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.481044 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.488316 2140188 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-358443"
	I1002 10:37:14.488386 2140188 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-358443"
	I1002 10:37:14.488437 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.488871 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.493515 2140188 addons.go:69] Setting metrics-server=true in profile "addons-358443"
	I1002 10:37:14.493544 2140188 addons.go:231] Setting addon metrics-server=true in "addons-358443"
	I1002 10:37:14.493588 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.494038 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.499576 2140188 addons.go:69] Setting default-storageclass=true in profile "addons-358443"
	I1002 10:37:14.499615 2140188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-358443"
	I1002 10:37:14.499940 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.510728 2140188 addons.go:69] Setting registry=true in profile "addons-358443"
	I1002 10:37:14.510766 2140188 addons.go:231] Setting addon registry=true in "addons-358443"
	I1002 10:37:14.510815 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.511292 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.511534 2140188 addons.go:69] Setting gcp-auth=true in profile "addons-358443"
	I1002 10:37:14.511554 2140188 mustload.go:65] Loading cluster: addons-358443
	I1002 10:37:14.511715 2140188 config.go:182] Loaded profile config "addons-358443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:37:14.511927 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.529353 2140188 addons.go:69] Setting storage-provisioner=true in profile "addons-358443"
	I1002 10:37:14.529387 2140188 addons.go:231] Setting addon storage-provisioner=true in "addons-358443"
	I1002 10:37:14.529433 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.529915 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.547203 2140188 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-358443"
	I1002 10:37:14.547237 2140188 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-358443"
	I1002 10:37:14.547577 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.671442 2140188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.0
	I1002 10:37:14.681583 2140188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 10:37:14.686025 2140188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 10:37:14.697871 2140188 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 10:37:14.698239 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1002 10:37:14.698419 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.726240 2140188 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1002 10:37:14.728709 2140188 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1002 10:37:14.728774 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 10:37:14.728876 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.743827 2140188 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1002 10:37:14.746944 2140188 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 10:37:14.747010 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1002 10:37:14.747107 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.758628 2140188 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1002 10:37:14.764029 2140188 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1002 10:37:14.764097 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1002 10:37:14.764195 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.766911 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 10:37:14.768766 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 10:37:14.774162 2140188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:37:14.778265 2140188 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 10:37:14.778329 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 10:37:14.778438 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.775248 2140188 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-358443"
	I1002 10:37:14.779262 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.779724 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.801120 2140188 addons.go:231] Setting addon default-storageclass=true in "addons-358443"
	I1002 10:37:14.801157 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.801728 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:14.807692 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 10:37:14.775305 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 10:37:14.820258 2140188 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1002 10:37:14.825305 2140188 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 10:37:14.825370 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 10:37:14.825477 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.842357 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 10:37:14.844991 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	W1002 10:37:14.850228 2140188 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-358443" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1002 10:37:14.901374 2140188 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1002 10:37:14.901407 2140188 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 10:37:14.905422 2140188 out.go:177] * Verifying Kubernetes components...
	I1002 10:37:14.901654 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:14.901738 2140188 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 10:37:14.909385 2140188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:37:14.911526 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 10:37:14.911558 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 10:37:14.918516 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.919067 2140188 out.go:177]   - Using image docker.io/registry:2.8.1
	I1002 10:37:14.923781 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 10:37:14.925741 2140188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 10:37:14.924082 2140188 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1002 10:37:14.929210 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:14.929668 2140188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 10:37:14.929721 2140188 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 10:37:14.930569 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 10:37:14.930639 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.956667 2140188 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 10:37:14.956688 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1002 10:37:14.956753 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:14.967632 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.003625 2140188 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 10:37:15.003652 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 10:37:15.003719 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:15.041472 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.048139 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.050112 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.065650 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.068890 2140188 out.go:177]   - Using image docker.io/busybox:stable
	I1002 10:37:15.075181 2140188 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 10:37:15.077190 2140188 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 10:37:15.077215 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 10:37:15.077306 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:15.104419 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.116588 2140188 node_ready.go:35] waiting up to 6m0s for node "addons-358443" to be "Ready" ...
	I1002 10:37:15.130154 2140188 node_ready.go:49] node "addons-358443" has status "Ready":"True"
	I1002 10:37:15.130178 2140188 node_ready.go:38] duration metric: took 13.562433ms waiting for node "addons-358443" to be "Ready" ...
	I1002 10:37:15.130190 2140188 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:37:15.133404 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.134489 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.158588 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.166448 2140188 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:15.191193 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:15.723873 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 10:37:15.741980 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 10:37:15.802423 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 10:37:15.813026 2140188 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 10:37:15.813050 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 10:37:16.126884 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 10:37:16.186230 2140188 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 10:37:16.186292 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 10:37:16.229273 2140188 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 10:37:16.229301 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 10:37:16.278792 2140188 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 10:37:16.278815 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 10:37:16.325616 2140188 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1002 10:37:16.325639 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1002 10:37:16.339760 2140188 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 10:37:16.339793 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 10:37:16.341527 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 10:37:16.346084 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 10:37:16.346700 2140188 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 10:37:16.346716 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 10:37:16.373220 2140188 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 10:37:16.373264 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 10:37:16.635475 2140188 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 10:37:16.635499 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 10:37:16.696656 2140188 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 10:37:16.696683 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 10:37:16.707540 2140188 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 10:37:16.707573 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 10:37:16.751850 2140188 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1002 10:37:16.751874 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1002 10:37:16.761440 2140188 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 10:37:16.761464 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 10:37:16.958237 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 10:37:16.981304 2140188 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 10:37:16.981331 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 10:37:17.021198 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 10:37:17.083795 2140188 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 10:37:17.083822 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 10:37:17.157798 2140188 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1002 10:37:17.157867 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1002 10:37:17.225440 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:17.471208 2140188 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1002 10:37:17.471235 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1002 10:37:17.479357 2140188 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 10:37:17.479383 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 10:37:17.491634 2140188 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 10:37:17.491666 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 10:37:17.671359 2140188 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1002 10:37:17.671385 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1002 10:37:17.701421 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 10:37:17.720837 2140188 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 10:37:17.720861 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 10:37:18.018728 2140188 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 10:37:18.018794 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1002 10:37:18.025833 2140188 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 10:37:18.025920 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 10:37:18.181840 2140188 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1002 10:37:18.181925 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1002 10:37:18.225419 2140188 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 10:37:18.225497 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 10:37:18.446049 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1002 10:37:18.496695 2140188 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 10:37:18.496763 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 10:37:18.738998 2140188 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 10:37:18.739069 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 10:37:18.848749 2140188 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.918275469s)
	I1002 10:37:18.848817 2140188 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 10:37:18.885157 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 10:37:19.816018 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:20.605831 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.881919327s)
	I1002 10:37:21.561826 2140188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 10:37:21.561923 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:21.588978 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:22.150461 2140188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 10:37:22.214066 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:22.324954 2140188 addons.go:231] Setting addon gcp-auth=true in "addons-358443"
	I1002 10:37:22.325052 2140188 host.go:66] Checking if "addons-358443" exists ...
	I1002 10:37:22.325549 2140188 cli_runner.go:164] Run: docker container inspect addons-358443 --format={{.State.Status}}
	I1002 10:37:22.350233 2140188 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 10:37:22.350331 2140188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-358443
	I1002 10:37:22.382817 2140188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35490 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/addons-358443/id_rsa Username:docker}
	I1002 10:37:23.875896 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.073429651s)
	I1002 10:37:23.875969 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.749019227s)
	I1002 10:37:23.876004 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.534458302s)
	I1002 10:37:23.876198 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.530092403s)
	I1002 10:37:23.876313 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.918046534s)
	I1002 10:37:23.876327 2140188 addons.go:467] Verifying addon registry=true in "addons-358443"
	I1002 10:37:23.878371 2140188 out.go:177] * Verifying registry addon...
	I1002 10:37:23.876498 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.134491574s)
	I1002 10:37:23.876582 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.855354742s)
	I1002 10:37:23.876691 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.175240842s)
	I1002 10:37:23.876753 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.430630088s)
	I1002 10:37:23.881442 2140188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 10:37:23.881615 2140188 addons.go:467] Verifying addon ingress=true in "addons-358443"
	I1002 10:37:23.883695 2140188 out.go:177] * Verifying ingress addon...
	I1002 10:37:23.881732 2140188 addons.go:467] Verifying addon metrics-server=true in "addons-358443"
	W1002 10:37:23.881758 2140188 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 10:37:23.886041 2140188 retry.go:31] will retry after 237.866597ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 10:37:23.886835 2140188 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 10:37:23.903057 2140188 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 10:37:23.903088 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 10:37:23.907791 2140188 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1002 10:37:23.909337 2140188 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 10:37:23.909356 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:23.914287 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:23.916130 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:24.124479 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 10:37:24.215947 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:24.438092 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:24.439078 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:24.935581 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:24.936777 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:25.435343 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:25.455072 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:25.648250 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.762996666s)
	I1002 10:37:25.648325 2140188 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-358443"
	I1002 10:37:25.650221 2140188 out.go:177] * Verifying csi-hostpath-driver addon...
	I1002 10:37:25.648544 2140188 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.298253181s)
	I1002 10:37:25.652993 2140188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 10:37:25.655041 2140188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 10:37:25.656863 2140188 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1002 10:37:25.658793 2140188 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 10:37:25.658842 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 10:37:25.666311 2140188 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 10:37:25.666387 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:25.677534 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:25.767124 2140188 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 10:37:25.767214 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 10:37:25.848373 2140188 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 10:37:25.848405 2140188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1002 10:37:25.920527 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:25.936831 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:26.003883 2140188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 10:37:26.187118 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:26.429603 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:26.431141 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:26.479000 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.354419486s)
	I1002 10:37:26.684714 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:26.714628 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:26.919678 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:26.921505 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:27.183163 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:27.380398 2140188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.376467927s)
	I1002 10:37:27.382153 2140188 addons.go:467] Verifying addon gcp-auth=true in "addons-358443"
	I1002 10:37:27.385464 2140188 out.go:177] * Verifying gcp-auth addon...
	I1002 10:37:27.388501 2140188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 10:37:27.392443 2140188 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 10:37:27.392473 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:27.399304 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:27.427101 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:27.427959 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:27.700537 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:27.904000 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:27.920789 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:27.925535 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:28.184423 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:28.404165 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:28.427785 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:28.428988 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:28.684253 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:28.903714 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:28.922784 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:28.923821 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:29.183513 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:29.213793 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:29.403684 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:29.426971 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:29.427900 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:29.684087 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:29.903478 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:29.919888 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:29.922757 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:30.183884 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:30.403798 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:30.426629 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:30.427787 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:30.683760 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:30.903000 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:30.920882 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:30.924441 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:31.183873 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:31.216703 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:31.403069 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:31.426927 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:31.434256 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:31.684498 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:31.903704 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:31.921801 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:31.924444 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:32.184625 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:32.403579 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:32.426968 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:32.431121 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:32.684376 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:32.903139 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:32.922563 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:32.924228 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:33.184068 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:33.403833 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:33.427204 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:33.427861 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:33.684833 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:33.714711 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:33.903586 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:33.919310 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:33.922902 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:34.183973 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:34.403714 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:34.429323 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:34.430744 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:34.684523 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:34.906452 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:34.930359 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:34.931845 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:35.184934 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:35.403593 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:35.426196 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:35.428720 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:35.684174 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:35.903757 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:35.922216 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:35.922792 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:36.184915 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:36.215366 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:36.403407 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:36.426404 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:36.426961 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:36.683264 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:36.902795 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:36.920474 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:36.921129 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:37.183508 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:37.403260 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:37.426097 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:37.426393 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:37.685878 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:37.903316 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:37.919379 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:37.922590 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:38.184694 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:38.402881 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:38.426088 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:38.431130 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:38.684366 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:38.715279 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:38.903233 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:38.921697 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:38.922772 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:39.185498 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:39.403268 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:39.423489 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:39.424380 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:39.684529 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:39.904802 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:39.925841 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:39.929108 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:40.197801 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:40.403849 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:40.436130 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:40.437379 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:40.686487 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:40.718712 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:40.904733 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:40.925082 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:40.926245 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:41.183762 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:41.404051 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:41.428691 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:41.429617 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:41.684668 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:41.903542 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:41.922063 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:41.924223 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:42.188610 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:42.403891 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:42.425051 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:42.428074 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:42.683760 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:42.902982 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:42.921464 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:42.921944 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:43.189947 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:43.213785 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:43.403717 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:43.425157 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:43.426592 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:43.694865 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:43.903792 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:43.920835 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:43.922562 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:44.183476 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:44.403198 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:44.426396 2140188 kapi.go:107] duration metric: took 20.54495107s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 10:37:44.426963 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:44.683069 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:44.904095 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:44.920497 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:45.184766 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:45.215832 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:45.403677 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:45.419494 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:45.682974 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:45.903106 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:45.919491 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:46.183426 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:46.404958 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:46.419653 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:46.684324 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:46.903093 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:46.920435 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:47.188732 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:47.406820 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:47.422628 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:47.685793 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:47.715052 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:47.903761 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:47.920609 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:48.183627 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:48.403749 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:48.424836 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:48.683314 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:48.903818 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:48.918960 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:49.186408 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:49.403129 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:49.419836 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:49.683964 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:49.903182 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:49.918826 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:50.188318 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:50.219651 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:50.403857 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:50.418270 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:50.684733 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:50.903475 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:50.918631 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:51.183419 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:51.403168 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:51.425642 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:51.683492 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:51.903478 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:51.921166 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:52.184665 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:52.403147 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:52.424915 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:52.685050 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:52.715169 2140188 pod_ready.go:102] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:52.903868 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:52.919088 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:53.196955 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:53.403275 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:53.418948 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:53.683731 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:53.906651 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:53.921200 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:54.183797 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:54.403581 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:54.425467 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:54.682889 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:54.905542 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:54.919399 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:55.188555 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:55.248885 2140188 pod_ready.go:92] pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace has status "Ready":"True"
	I1002 10:37:55.248917 2140188 pod_ready.go:81] duration metric: took 40.082394363s waiting for pod "coredns-5dd5756b68-c2xmj" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.248930 2140188 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xvp8g" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.255521 2140188 pod_ready.go:92] pod "coredns-5dd5756b68-xvp8g" in "kube-system" namespace has status "Ready":"True"
	I1002 10:37:55.255546 2140188 pod_ready.go:81] duration metric: took 6.607506ms waiting for pod "coredns-5dd5756b68-xvp8g" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.255568 2140188 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-358443" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.262114 2140188 pod_ready.go:92] pod "etcd-addons-358443" in "kube-system" namespace has status "Ready":"True"
	I1002 10:37:55.262138 2140188 pod_ready.go:81] duration metric: took 6.562632ms waiting for pod "etcd-addons-358443" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.262150 2140188 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-358443" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.268275 2140188 pod_ready.go:92] pod "kube-apiserver-addons-358443" in "kube-system" namespace has status "Ready":"True"
	I1002 10:37:55.268302 2140188 pod_ready.go:81] duration metric: took 6.144401ms waiting for pod "kube-apiserver-addons-358443" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.268313 2140188 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-358443" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.274257 2140188 pod_ready.go:92] pod "kube-controller-manager-addons-358443" in "kube-system" namespace has status "Ready":"True"
	I1002 10:37:55.274282 2140188 pod_ready.go:81] duration metric: took 5.961617ms waiting for pod "kube-controller-manager-addons-358443" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.274294 2140188 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khnvx" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.403424 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:55.426087 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:55.611850 2140188 pod_ready.go:92] pod "kube-proxy-khnvx" in "kube-system" namespace has status "Ready":"True"
	I1002 10:37:55.611869 2140188 pod_ready.go:81] duration metric: took 337.568043ms waiting for pod "kube-proxy-khnvx" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.611881 2140188 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-358443" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:55.690366 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:55.904030 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:55.919894 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:56.011686 2140188 pod_ready.go:92] pod "kube-scheduler-addons-358443" in "kube-system" namespace has status "Ready":"True"
	I1002 10:37:56.011710 2140188 pod_ready.go:81] duration metric: took 399.820707ms waiting for pod "kube-scheduler-addons-358443" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:56.011721 2140188 pod_ready.go:38] duration metric: took 40.88151968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:37:56.011748 2140188 api_server.go:52] waiting for apiserver process to appear ...
	I1002 10:37:56.011828 2140188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:37:56.042762 2140188 api_server.go:72] duration metric: took 41.141322608s to wait for apiserver process to appear ...
	I1002 10:37:56.042788 2140188 api_server.go:88] waiting for apiserver healthz status ...
	I1002 10:37:56.042819 2140188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 10:37:56.052825 2140188 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 10:37:56.054313 2140188 api_server.go:141] control plane version: v1.28.2
	I1002 10:37:56.054347 2140188 api_server.go:131] duration metric: took 11.552963ms to wait for apiserver health ...
	I1002 10:37:56.054357 2140188 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 10:37:56.184663 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:56.222673 2140188 system_pods.go:59] 17 kube-system pods found
	I1002 10:37:56.222707 2140188 system_pods.go:61] "coredns-5dd5756b68-c2xmj" [9c93e949-a5c6-4599-8b9e-24041adc9d94] Running
	I1002 10:37:56.222714 2140188 system_pods.go:61] "coredns-5dd5756b68-xvp8g" [a6e95575-5ab1-4953-89c4-748fbd669195] Running
	I1002 10:37:56.222723 2140188 system_pods.go:61] "csi-hostpath-attacher-0" [49713935-5183-4ed5-980e-05defb30a1a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 10:37:56.222731 2140188 system_pods.go:61] "csi-hostpath-resizer-0" [fe0e1581-5307-4cc1-9675-a67f2f49f255] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 10:37:56.222742 2140188 system_pods.go:61] "csi-hostpathplugin-5bmbl" [5cabddef-745e-4e1e-83d0-114ff481bd3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 10:37:56.222748 2140188 system_pods.go:61] "etcd-addons-358443" [02460971-4a17-4334-8f11-bf725e843d50] Running
	I1002 10:37:56.222754 2140188 system_pods.go:61] "kube-apiserver-addons-358443" [b0c85f7b-28f4-4af9-b158-0cad39cd1728] Running
	I1002 10:37:56.222759 2140188 system_pods.go:61] "kube-controller-manager-addons-358443" [5d16e637-4f05-4fba-af46-2c4d91d8cf8c] Running
	I1002 10:37:56.222767 2140188 system_pods.go:61] "kube-ingress-dns-minikube" [13858448-c0cb-4869-9045-9a8f955d081f] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 10:37:56.222776 2140188 system_pods.go:61] "kube-proxy-khnvx" [296ec7de-2dc2-4015-bccc-a56f7cf3f703] Running
	I1002 10:37:56.222782 2140188 system_pods.go:61] "kube-scheduler-addons-358443" [d3215fcd-ba97-4350-9652-e4960abe2fad] Running
	I1002 10:37:56.222794 2140188 system_pods.go:61] "metrics-server-7c66d45ddc-6k96t" [1ce4d173-fab0-4400-a21a-28781c10d1c9] Running
	I1002 10:37:56.222800 2140188 system_pods.go:61] "registry-77zwl" [eccb8ea5-8a7b-4635-ae3e-581e52d381b3] Running
	I1002 10:37:56.222805 2140188 system_pods.go:61] "registry-proxy-vtjvv" [9fdaa465-d280-4acb-926f-0390823f5a3a] Running
	I1002 10:37:56.222817 2140188 system_pods.go:61] "snapshot-controller-58dbcc7b99-brqz5" [05d0889f-58b5-45af-903b-cacc1f933a3c] Running
	I1002 10:37:56.222821 2140188 system_pods.go:61] "snapshot-controller-58dbcc7b99-zxbzd" [7f4d7993-243e-4511-9d43-312ccf205df2] Running
	I1002 10:37:56.222828 2140188 system_pods.go:61] "storage-provisioner" [6f0c5a89-4607-4975-a5af-03d4fe39f3ef] Running
	I1002 10:37:56.222836 2140188 system_pods.go:74] duration metric: took 168.473265ms to wait for pod list to return data ...
	I1002 10:37:56.222844 2140188 default_sa.go:34] waiting for default service account to be created ...
	I1002 10:37:56.403829 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:56.411134 2140188 default_sa.go:45] found service account: "default"
	I1002 10:37:56.411160 2140188 default_sa.go:55] duration metric: took 188.309214ms for default service account to be created ...
	I1002 10:37:56.411170 2140188 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 10:37:56.425002 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:56.624205 2140188 system_pods.go:86] 17 kube-system pods found
	I1002 10:37:56.624239 2140188 system_pods.go:89] "coredns-5dd5756b68-c2xmj" [9c93e949-a5c6-4599-8b9e-24041adc9d94] Running
	I1002 10:37:56.624246 2140188 system_pods.go:89] "coredns-5dd5756b68-xvp8g" [a6e95575-5ab1-4953-89c4-748fbd669195] Running
	I1002 10:37:56.624256 2140188 system_pods.go:89] "csi-hostpath-attacher-0" [49713935-5183-4ed5-980e-05defb30a1a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 10:37:56.624265 2140188 system_pods.go:89] "csi-hostpath-resizer-0" [fe0e1581-5307-4cc1-9675-a67f2f49f255] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 10:37:56.624273 2140188 system_pods.go:89] "csi-hostpathplugin-5bmbl" [5cabddef-745e-4e1e-83d0-114ff481bd3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 10:37:56.624329 2140188 system_pods.go:89] "etcd-addons-358443" [02460971-4a17-4334-8f11-bf725e843d50] Running
	I1002 10:37:56.624355 2140188 system_pods.go:89] "kube-apiserver-addons-358443" [b0c85f7b-28f4-4af9-b158-0cad39cd1728] Running
	I1002 10:37:56.624389 2140188 system_pods.go:89] "kube-controller-manager-addons-358443" [5d16e637-4f05-4fba-af46-2c4d91d8cf8c] Running
	I1002 10:37:56.624423 2140188 system_pods.go:89] "kube-ingress-dns-minikube" [13858448-c0cb-4869-9045-9a8f955d081f] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 10:37:56.624460 2140188 system_pods.go:89] "kube-proxy-khnvx" [296ec7de-2dc2-4015-bccc-a56f7cf3f703] Running
	I1002 10:37:56.624498 2140188 system_pods.go:89] "kube-scheduler-addons-358443" [d3215fcd-ba97-4350-9652-e4960abe2fad] Running
	I1002 10:37:56.624509 2140188 system_pods.go:89] "metrics-server-7c66d45ddc-6k96t" [1ce4d173-fab0-4400-a21a-28781c10d1c9] Running
	I1002 10:37:56.624565 2140188 system_pods.go:89] "registry-77zwl" [eccb8ea5-8a7b-4635-ae3e-581e52d381b3] Running
	I1002 10:37:56.624572 2140188 system_pods.go:89] "registry-proxy-vtjvv" [9fdaa465-d280-4acb-926f-0390823f5a3a] Running
	I1002 10:37:56.624578 2140188 system_pods.go:89] "snapshot-controller-58dbcc7b99-brqz5" [05d0889f-58b5-45af-903b-cacc1f933a3c] Running
	I1002 10:37:56.624583 2140188 system_pods.go:89] "snapshot-controller-58dbcc7b99-zxbzd" [7f4d7993-243e-4511-9d43-312ccf205df2] Running
	I1002 10:37:56.624588 2140188 system_pods.go:89] "storage-provisioner" [6f0c5a89-4607-4975-a5af-03d4fe39f3ef] Running
	I1002 10:37:56.624698 2140188 system_pods.go:126] duration metric: took 213.521075ms to wait for k8s-apps to be running ...
	I1002 10:37:56.624776 2140188 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 10:37:56.624844 2140188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:37:56.647009 2140188 system_svc.go:56] duration metric: took 22.222081ms WaitForService to wait for kubelet.
	I1002 10:37:56.647044 2140188 kubeadm.go:581] duration metric: took 41.74560041s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 10:37:56.647065 2140188 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:37:56.684231 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:56.812260 2140188 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:37:56.812293 2140188 node_conditions.go:123] node cpu capacity is 2
	I1002 10:37:56.812316 2140188 node_conditions.go:105] duration metric: took 165.241803ms to run NodePressure ...
	I1002 10:37:56.812327 2140188 start.go:228] waiting for startup goroutines ...
	I1002 10:37:56.903758 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:56.920208 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:57.185652 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:57.403495 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:57.426017 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:57.683936 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:57.903945 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:57.919623 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:58.185060 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:58.403867 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:58.420108 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:58.683985 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:58.903951 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:58.919386 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:59.183097 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:59.403767 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:59.423618 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:59.685052 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:59.903883 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:59.919125 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:00.184917 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:00.403952 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:00.430851 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:00.683953 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:00.903696 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:00.919258 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:01.184432 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:01.403996 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:01.418837 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:01.683548 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:01.903264 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:01.920909 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:02.184657 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:02.403494 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:02.419227 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:02.684788 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:02.903853 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:02.919766 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:03.184786 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:03.403630 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:03.424773 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:03.683781 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:03.903380 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:03.918846 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:04.183617 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:04.403314 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:04.419283 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:04.682775 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:04.903471 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:04.919231 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:05.186559 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:05.406168 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:05.425551 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:05.683744 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:05.903666 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:05.920709 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:06.184044 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:06.404153 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:06.424258 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:06.683849 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:06.903882 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:06.921026 2140188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:07.186097 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:07.403531 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:07.425485 2140188 kapi.go:107] duration metric: took 43.538645138s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 10:38:07.682533 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:07.903047 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:08.193371 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:08.403776 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:08.683975 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:08.903445 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:09.184057 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:09.404383 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:09.683773 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:09.903965 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:10.183917 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:10.403671 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:10.694341 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:10.902916 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:11.183582 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:11.403224 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:11.683884 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:11.906428 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:12.184525 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:12.403428 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:12.683599 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:12.906844 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:13.187218 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:13.402898 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:13.683822 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:13.903552 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:14.183055 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:14.403097 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:14.702868 2140188 kapi.go:107] duration metric: took 49.04987026s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 10:38:14.903543 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:15.402789 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:15.902939 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:16.403468 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:16.905885 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:17.402974 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:17.903160 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:18.403853 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:18.902696 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:19.403201 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:19.903968 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:20.403524 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:20.903194 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:21.403791 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:21.903582 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:22.403645 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:22.903935 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:23.402862 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:23.903642 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:24.403811 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:24.903645 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:25.402936 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:25.903290 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:26.403081 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:26.903657 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:27.403537 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:27.903424 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:28.403248 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:28.903007 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:29.403307 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:29.904985 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:30.402822 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:30.902880 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:31.402827 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:31.904103 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:32.402743 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:32.903823 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:33.403046 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:33.903124 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:34.403378 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:34.906766 2140188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:35.403006 2140188 kapi.go:107] duration metric: took 1m8.014510513s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 10:38:35.405534 2140188 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-358443 cluster.
	I1002 10:38:35.407292 2140188 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 10:38:35.408948 2140188 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 10:38:35.410946 2140188 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1002 10:38:35.412615 2140188 addons.go:502] enable addons completed in 1m20.936144935s: enabled=[storage-provisioner cloud-spanner ingress-dns inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1002 10:38:35.412649 2140188 start.go:233] waiting for cluster config update ...
	I1002 10:38:35.412666 2140188 start.go:242] writing updated cluster config ...
	I1002 10:38:35.412973 2140188 ssh_runner.go:195] Run: rm -f paused
	I1002 10:38:35.489627 2140188 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 10:38:35.491596 2140188 out.go:177] * Done! kubectl is now configured to use "addons-358443" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Oct 02 10:39:19 addons-358443 dockerd[1102]: time="2023-10-02T10:39:19.994403502Z" level=info msg="ignoring event" container=2c03ef3a3875c6fb8235613207858f6de2bb50c69163548d2262b0847d991094 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:22 addons-358443 cri-dockerd[1313]: time="2023-10-02T10:39:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c674f3bb3c97edf3de1ffb7853275c8dec0fad359f286197076670b3de0c6d8/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 10:39:22 addons-358443 cri-dockerd[1313]: time="2023-10-02T10:39:22Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Oct 02 10:39:29 addons-358443 dockerd[1102]: time="2023-10-02T10:39:29.797667964Z" level=info msg="ignoring event" container=44e4f931dc9773aaea00d38f9ffca84fe21fb7007639a21046fd09078d643507 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:29 addons-358443 dockerd[1102]: time="2023-10-02T10:39:29.909363592Z" level=info msg="ignoring event" container=9c674f3bb3c97edf3de1ffb7853275c8dec0fad359f286197076670b3de0c6d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.704968280Z" level=info msg="ignoring event" container=116eff28fc24f56102a0c890582ec2809d73cda800c6e57162b8ffc1f9aa3890 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.718430795Z" level=info msg="ignoring event" container=c3c96eadcf693be25e1c32a675e8d71dbdf56ec17c73aa0557e370f6d8a57267 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.748878541Z" level=info msg="ignoring event" container=00c8dfe6e510e799af87515da73955b94fdfa7b9f0dbaf388242a2d876924d29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.754351970Z" level=info msg="ignoring event" container=67c15ed5cdf2dc6c62b673b8bcbf15a3777a1f71b13a0641a399fad34b83ff85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.763790239Z" level=info msg="ignoring event" container=ded9fd56967d199795130c5db025fa31928e952af02760f0c818f80c32565cf3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.780207826Z" level=info msg="ignoring event" container=12ee499721314552cd3f421fd24ad06654481551a81da8f69c1955c1046acd4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.780254202Z" level=info msg="ignoring event" container=d4d7ac553471596d05d3fa50bd5043175644b8d496358b1648a49bf3ecc42b64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.791736671Z" level=info msg="ignoring event" container=900bc2f03ea409eb7ef1f82caa40834e0cb1fd8cd43935bee71f72d1428876d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:31 addons-358443 dockerd[1102]: time="2023-10-02T10:39:31.917655812Z" level=info msg="ignoring event" container=2258741a3eef9dd220dbbaebd00f48168292a01cb5150a0b9cbaef2f4c5dda07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:32 addons-358443 dockerd[1102]: time="2023-10-02T10:39:32.005177545Z" level=info msg="ignoring event" container=77f5fea1ee79b57bb7e1ffe02b07614d8a74cc6cbb25abc79112f3dcc0e3579d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:32 addons-358443 dockerd[1102]: time="2023-10-02T10:39:32.014426096Z" level=info msg="ignoring event" container=08d0e036324f8520eaab1913c8519260a87ca0ec1feb0d213133f202fa43e8e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:33 addons-358443 dockerd[1102]: time="2023-10-02T10:39:33.260215649Z" level=info msg="ignoring event" container=d563ecbf1210201cffd10af7c4da03f104a99481598555cafbd7309df28bd090 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:36 addons-358443 dockerd[1102]: time="2023-10-02T10:39:36.473694871Z" level=info msg="ignoring event" container=f55bf5be631941cea7d5262a3279f4447634e96b1962d56ec82692c12f3433ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:37 addons-358443 dockerd[1102]: time="2023-10-02T10:39:37.953082709Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=4360aa207761b3ae8013dff32f813e64223411baf5f3f6e6082cced6c397ee74
	Oct 02 10:39:38 addons-358443 dockerd[1102]: time="2023-10-02T10:39:38.037840892Z" level=info msg="ignoring event" container=4360aa207761b3ae8013dff32f813e64223411baf5f3f6e6082cced6c397ee74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:38 addons-358443 dockerd[1102]: time="2023-10-02T10:39:38.285286129Z" level=info msg="ignoring event" container=3e4cb1d6e9286fd3a820b6dba5d6f1d362bd98730622a415f5401b6ac4c8e0d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:38 addons-358443 dockerd[1102]: time="2023-10-02T10:39:38.388349111Z" level=info msg="ignoring event" container=29c87e16638fa779162e61aa77f6fdc275a39ef48a1890fb4c74026413797bf3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:38 addons-358443 dockerd[1102]: time="2023-10-02T10:39:38.394433220Z" level=info msg="ignoring event" container=039452c900e108303197c739e66dbc63e69c4f7575d5bb48a3d8999f696d09ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:38 addons-358443 dockerd[1102]: time="2023-10-02T10:39:38.526237601Z" level=info msg="ignoring event" container=b849dbec777eb454c94f80b27aa4ba5d16d16e895fc5ce23c4a61f4c20e54176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:39:38 addons-358443 dockerd[1102]: time="2023-10-02T10:39:38.581634310Z" level=info msg="ignoring event" container=61f9cc17823d61d5be01b1c1bb71047278a1dcc96f0d088cbe4dc56b4ca274a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f55bf5be63194       97e050c3e21e9                                                                                                                7 seconds ago        Exited              hello-world-app           2                   8398289e923d9       hello-world-app-5d77478584-jhhwn
	b8760f2805b1a       645adbf280ba8                                                                                                                29 seconds ago       Exited              cloud-spanner-emulator    4                   46ca7e2d5f2a7       cloud-spanner-emulator-7d49f968d9-6kxw5
	11f9995156c3c       nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                                                33 seconds ago       Running             nginx                     0                   f035decce844d       nginx
	011b700b0afe0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                  0                   230b61d83ddf2       gcp-auth-d4c87556c-7mxwn
	65718675a3579       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   About a minute ago   Exited              patch                     0                   41d838d937bf9       ingress-nginx-admission-patch-7qdwd
	d0bb27e875e24       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   About a minute ago   Exited              create                    0                   5abc6a83190bc       ingress-nginx-admission-create-fflmv
	b905d24ec2e05       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       About a minute ago   Running             local-path-provisioner    0                   6aad3b428f10c       local-path-provisioner-78b46b4d5c-8zrlp
	e3362d25e3d12       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner       0                   f0c7b6f70316b       storage-provisioner
	ce93bcd7c78d1       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                   0                   b0106ab909aa4       coredns-5dd5756b68-xvp8g
	3c5ad545e017a       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                   0                   4f70ece27763e       coredns-5dd5756b68-c2xmj
	71104ac6fcbf8       7da62c127fc0f                                                                                                                2 minutes ago        Running             kube-proxy                0                   ccf6b4e376d35       kube-proxy-khnvx
	13fc9c0af51eb       89d57b83c1786                                                                                                                2 minutes ago        Running             kube-controller-manager   0                   dac24f3ea0e03       kube-controller-manager-addons-358443
	9b37ea42e3f55       30bb499447fe1                                                                                                                2 minutes ago        Running             kube-apiserver            0                   66c3a356f734f       kube-apiserver-addons-358443
	4263addf775d6       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                      0                   0d9fbdd192d70       etcd-addons-358443
	1049f846380ac       64fc40cee3716                                                                                                                2 minutes ago        Running             kube-scheduler            0                   1504d11a3e03e       kube-scheduler-addons-358443
	
	* 
	* ==> coredns [3c5ad545e017] <==
	* [INFO] Reloading complete
	[INFO] 127.0.0.1:43470 - 62800 "HINFO IN 1562201118205074106.8413096429820984810. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020229465s
	[INFO] 10.244.0.7:38435 - 12610 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.001603227s
	[INFO] 10.244.0.7:38435 - 1862 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144303s
	[INFO] 10.244.0.7:43091 - 31217 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122765s
	[INFO] 10.244.0.7:33926 - 16993 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089386s
	[INFO] 10.244.0.7:40197 - 62386 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096032s
	[INFO] 10.244.0.19:58680 - 35670 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000247105s
	[INFO] 10.244.0.19:45302 - 18056 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133833s
	[INFO] 10.244.0.19:49442 - 45553 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131536s
	[INFO] 10.244.0.19:49708 - 60093 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013979s
	[INFO] 10.244.0.19:51520 - 6965 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110252s
	[INFO] 10.244.0.19:58103 - 25115 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.0017238s
	[INFO] 10.244.0.19:42245 - 49035 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001847098s
	[INFO] 10.244.0.19:39309 - 32525 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000618549s
	[INFO] 10.244.0.20:45143 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000332757s
	[INFO] 10.244.0.18:52711 - 63907 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000250822s
	[INFO] 10.244.0.18:52711 - 58514 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000140651s
	[INFO] 10.244.0.18:52711 - 2634 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000351785s
	[INFO] 10.244.0.18:52711 - 34618 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000129796s
	[INFO] 10.244.0.18:52711 - 59813 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00010203s
	[INFO] 10.244.0.18:52711 - 61951 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000103786s
	[INFO] 10.244.0.18:52711 - 61994 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001101386s
	[INFO] 10.244.0.18:52711 - 10402 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00250696s
	[INFO] 10.244.0.18:52711 - 48202 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000150301s
	
	* 
	* ==> coredns [ce93bcd7c78d] <==
	* [INFO] 10.244.0.18:56092 - 18303 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000133021s
	[INFO] 10.244.0.18:56092 - 9811 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000120623s
	[INFO] 10.244.0.18:56092 - 58699 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000113919s
	[INFO] 10.244.0.18:56092 - 9633 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000129189s
	[INFO] 10.244.0.18:52156 - 21981 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105239s
	[INFO] 10.244.0.18:52156 - 39136 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00029133s
	[INFO] 10.244.0.18:56092 - 57035 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002157652s
	[INFO] 10.244.0.18:52156 - 6654 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000195593s
	[INFO] 10.244.0.18:52156 - 51154 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047909s
	[INFO] 10.244.0.18:52156 - 34351 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062531s
	[INFO] 10.244.0.18:52156 - 58032 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00009284s
	[INFO] 10.244.0.18:56092 - 23378 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002115667s
	[INFO] 10.244.0.18:52156 - 35422 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001460662s
	[INFO] 10.244.0.18:56092 - 38290 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000180381s
	[INFO] 10.244.0.18:52156 - 54576 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001000118s
	[INFO] 10.244.0.18:52156 - 55049 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000092331s
	[INFO] 10.244.0.18:37022 - 25116 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00030925s
	[INFO] 10.244.0.18:37022 - 43431 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075676s
	[INFO] 10.244.0.18:37022 - 22677 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062777s
	[INFO] 10.244.0.18:37022 - 45198 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059946s
	[INFO] 10.244.0.18:37022 - 25301 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046039s
	[INFO] 10.244.0.18:37022 - 3410 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048205s
	[INFO] 10.244.0.18:37022 - 54214 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002658295s
	[INFO] 10.244.0.18:37022 - 27487 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001261287s
	[INFO] 10.244.0.18:37022 - 27310 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078465s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-358443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-358443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=addons-358443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T10_37_01_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-358443
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-358443
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 10:39:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:39:35 +0000   Mon, 02 Oct 2023 10:36:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:39:35 +0000   Mon, 02 Oct 2023 10:36:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:39:35 +0000   Mon, 02 Oct 2023 10:36:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:39:35 +0000   Mon, 02 Oct 2023 10:37:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-358443
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 654ddaca10534136a9c49d4d15b9ebb9
	  System UUID:                112f6013-3b31-4bd7-bc75-a88f61c03d3f
	  Boot ID:                    8f181a8e-95ee-4bd9-9704-e77c1ff4607b
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-7d49f968d9-6kxw5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  default                     hello-world-app-5d77478584-jhhwn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-d4c87556c-7mxwn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 coredns-5dd5756b68-c2xmj                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m29s
	  kube-system                 coredns-5dd5756b68-xvp8g                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m29s
	  kube-system                 etcd-addons-358443                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m42s
	  kube-system                 kube-apiserver-addons-358443               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-controller-manager-addons-358443      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-proxy-khnvx                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-addons-358443               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  local-path-storage          local-path-provisioner-78b46b4d5c-8zrlp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (3%)  340Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m50s (x8 over 2m50s)  kubelet          Node addons-358443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x8 over 2m50s)  kubelet          Node addons-358443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x7 over 2m50s)  kubelet          Node addons-358443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m42s                  kubelet          Node addons-358443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s                  kubelet          Node addons-358443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s                  kubelet          Node addons-358443 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m42s                  kubelet          Node addons-358443 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m42s                  kubelet          Node addons-358443 status is now: NodeReady
	  Normal  Starting                 2m42s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m29s                  node-controller  Node addons-358443 event: Registered Node addons-358443 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001129] FS-Cache: O-key=[8] 'bf693b0000000000'
	[  +0.000794] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.001025] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000e355e878
	[  +0.001097] FS-Cache: N-key=[8] 'bf693b0000000000'
	[  +0.005635] FS-Cache: Duplicate cookie detected
	[  +0.000713] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001006] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=000000008248e2b6
	[  +0.001071] FS-Cache: O-key=[8] 'bf693b0000000000'
	[  +0.000757] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000332c84bc
	[  +0.001060] FS-Cache: N-key=[8] 'bf693b0000000000'
	[  +3.066474] FS-Cache: Duplicate cookie detected
	[  +0.000697] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001078] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=000000000adf6605
	[  +0.001038] FS-Cache: O-key=[8] 'be693b0000000000'
	[  +0.000702] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000e355e878
	[  +0.001293] FS-Cache: N-key=[8] 'be693b0000000000'
	[  +0.385473] FS-Cache: Duplicate cookie detected
	[  +0.000872] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001042] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=00000000d3ed889d
	[  +0.001173] FS-Cache: O-key=[8] 'c4693b0000000000'
	[  +0.000773] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000c70a5e8a
	[  +0.001151] FS-Cache: N-key=[8] 'c4693b0000000000'
	
	* 
	* ==> etcd [4263addf775d] <==
	* {"level":"info","ts":"2023-10-02T10:36:54.043795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-10-02T10:36:54.04387Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-10-02T10:36:54.046159Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-02T10:36:54.046179Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-02T10:36:54.046135Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T10:36:54.046893Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T10:36:54.046864Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T10:36:54.933299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T10:36:54.933391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T10:36:54.93344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-02T10:36:54.933485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T10:36:54.933516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-02T10:36:54.933548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-02T10:36:54.933586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-02T10:36:54.937428Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-358443 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T10:36:54.937512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:36:54.938548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T10:36:54.938674Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:36:54.939805Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:36:54.940791Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-02T10:36:54.949358Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:36:54.94958Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:36:54.953313Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:36:54.9893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T10:36:54.989503Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [011b700b0afe] <==
	* 2023/10/02 10:38:34 GCP Auth Webhook started!
	2023/10/02 10:38:45 Ready to marshal response ...
	2023/10/02 10:38:45 Ready to write response ...
	2023/10/02 10:38:49 Ready to marshal response ...
	2023/10/02 10:38:49 Ready to write response ...
	2023/10/02 10:39:08 Ready to marshal response ...
	2023/10/02 10:39:08 Ready to write response ...
	2023/10/02 10:39:16 Ready to marshal response ...
	2023/10/02 10:39:16 Ready to write response ...
	2023/10/02 10:39:21 Ready to marshal response ...
	2023/10/02 10:39:21 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  10:39:43 up 18:22,  0 users,  load average: 2.12, 2.37, 2.09
	Linux addons-358443 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [9b37ea42e3f5] <==
	* I1002 10:39:02.293996       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1002 10:39:02.671693       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W1002 10:39:03.314725       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1002 10:39:07.879870       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 10:39:08.308844       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.241.163"}
	I1002 10:39:17.342952       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.139.250"}
	I1002 10:39:37.972889       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:39:37.972946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:39:37.981866       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:39:37.981921       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:39:37.994656       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:39:37.994695       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:39:38.011538       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:39:38.015103       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:39:38.079904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:39:38.079980       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:39:38.108201       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:39:38.109617       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:39:38.141613       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:39:38.141683       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:39:38.164210       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:39:38.164340       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 10:39:38.995076       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 10:39:39.164485       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 10:39:39.210026       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [13fc9c0af51e] <==
	* I1002 10:39:29.305685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="44.907µs"
	I1002 10:39:31.461373       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I1002 10:39:31.563380       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I1002 10:39:34.899319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-f6b66b4b9" duration="7.59µs"
	I1002 10:39:34.899812       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1002 10:39:34.915005       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1002 10:39:37.333768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.913µs"
	I1002 10:39:38.238086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.162µs"
	E1002 10:39:38.997133       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:39.166293       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:39.211945       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:39:39.872259       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:39.872295       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:39:40.020351       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:40.020393       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:39:40.423360       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:40.423399       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:39:40.464418       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:40.464451       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:39:42.138257       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:42.138294       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:39:42.568926       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:42.568960       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:39:43.073668       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:39:43.073701       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [71104ac6fcbf] <==
	* I1002 10:37:16.032428       1 server_others.go:69] "Using iptables proxy"
	I1002 10:37:16.059193       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1002 10:37:16.142144       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 10:37:16.146715       1 server_others.go:152] "Using iptables Proxier"
	I1002 10:37:16.146749       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 10:37:16.146756       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 10:37:16.146808       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 10:37:16.147037       1 server.go:846] "Version info" version="v1.28.2"
	I1002 10:37:16.147048       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:37:16.147984       1 config.go:188] "Starting service config controller"
	I1002 10:37:16.148038       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 10:37:16.148057       1 config.go:97] "Starting endpoint slice config controller"
	I1002 10:37:16.148061       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 10:37:16.148613       1 config.go:315] "Starting node config controller"
	I1002 10:37:16.148620       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 10:37:16.249373       1 shared_informer.go:318] Caches are synced for node config
	I1002 10:37:16.249394       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 10:37:16.249379       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [1049f846380a] <==
	* W1002 10:36:58.161373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 10:36:58.161409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 10:36:58.161538       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:36:58.161566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 10:36:58.161524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 10:36:58.161608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 10:36:58.983960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 10:36:58.983998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 10:36:58.988686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 10:36:58.989397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 10:36:59.008431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 10:36:59.008681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 10:36:59.015421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 10:36:59.015622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 10:36:59.168264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:36:59.168304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 10:36:59.194959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 10:36:59.195083       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 10:36:59.197839       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 10:36:59.197900       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 10:36:59.252358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:36:59.252394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 10:36:59.329321       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 10:36:59.329365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1002 10:37:02.148069       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 02 10:39:37 addons-358443 kubelet[2303]: E1002 10:39:37.312697    2303 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-jhhwn_default(2d2409b7-36d0-4ae0-8901-533b8921e27d)\"" pod="default/hello-world-app-5d77478584-jhhwn" podUID="2d2409b7-36d0-4ae0-8901-533b8921e27d"
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.434854    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb5dcf1d-6c39-4269-9592-80946f3cac61-webhook-cert\") pod \"bb5dcf1d-6c39-4269-9592-80946f3cac61\" (UID: \"bb5dcf1d-6c39-4269-9592-80946f3cac61\") "
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.435081    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v5d4\" (UniqueName: \"kubernetes.io/projected/bb5dcf1d-6c39-4269-9592-80946f3cac61-kube-api-access-7v5d4\") pod \"bb5dcf1d-6c39-4269-9592-80946f3cac61\" (UID: \"bb5dcf1d-6c39-4269-9592-80946f3cac61\") "
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.442683    2303 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5dcf1d-6c39-4269-9592-80946f3cac61-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "bb5dcf1d-6c39-4269-9592-80946f3cac61" (UID: "bb5dcf1d-6c39-4269-9592-80946f3cac61"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.449704    2303 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5dcf1d-6c39-4269-9592-80946f3cac61-kube-api-access-7v5d4" (OuterVolumeSpecName: "kube-api-access-7v5d4") pod "bb5dcf1d-6c39-4269-9592-80946f3cac61" (UID: "bb5dcf1d-6c39-4269-9592-80946f3cac61"). InnerVolumeSpecName "kube-api-access-7v5d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.536190    2303 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7v5d4\" (UniqueName: \"kubernetes.io/projected/bb5dcf1d-6c39-4269-9592-80946f3cac61-kube-api-access-7v5d4\") on node \"addons-358443\" DevicePath \"\""
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.536234    2303 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb5dcf1d-6c39-4269-9592-80946f3cac61-webhook-cert\") on node \"addons-358443\" DevicePath \"\""
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.636687    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmjp9\" (UniqueName: \"kubernetes.io/projected/05d0889f-58b5-45af-903b-cacc1f933a3c-kube-api-access-rmjp9\") pod \"05d0889f-58b5-45af-903b-cacc1f933a3c\" (UID: \"05d0889f-58b5-45af-903b-cacc1f933a3c\") "
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.639295    2303 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d0889f-58b5-45af-903b-cacc1f933a3c-kube-api-access-rmjp9" (OuterVolumeSpecName: "kube-api-access-rmjp9") pod "05d0889f-58b5-45af-903b-cacc1f933a3c" (UID: "05d0889f-58b5-45af-903b-cacc1f933a3c"). InnerVolumeSpecName "kube-api-access-rmjp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.737717    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6xp9\" (UniqueName: \"kubernetes.io/projected/7f4d7993-243e-4511-9d43-312ccf205df2-kube-api-access-x6xp9\") pod \"7f4d7993-243e-4511-9d43-312ccf205df2\" (UID: \"7f4d7993-243e-4511-9d43-312ccf205df2\") "
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.737810    2303 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rmjp9\" (UniqueName: \"kubernetes.io/projected/05d0889f-58b5-45af-903b-cacc1f933a3c-kube-api-access-rmjp9\") on node \"addons-358443\" DevicePath \"\""
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.741487    2303 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4d7993-243e-4511-9d43-312ccf205df2-kube-api-access-x6xp9" (OuterVolumeSpecName: "kube-api-access-x6xp9") pod "7f4d7993-243e-4511-9d43-312ccf205df2" (UID: "7f4d7993-243e-4511-9d43-312ccf205df2"). InnerVolumeSpecName "kube-api-access-x6xp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 10:39:38 addons-358443 kubelet[2303]: I1002 10:39:38.838808    2303 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x6xp9\" (UniqueName: \"kubernetes.io/projected/7f4d7993-243e-4511-9d43-312ccf205df2-kube-api-access-x6xp9\") on node \"addons-358443\" DevicePath \"\""
	Oct 02 10:39:39 addons-358443 kubelet[2303]: I1002 10:39:39.297150    2303 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bb5dcf1d-6c39-4269-9592-80946f3cac61" path="/var/lib/kubelet/pods/bb5dcf1d-6c39-4269-9592-80946f3cac61/volumes"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: I1002 10:39:39.402074    2303 scope.go:117] "RemoveContainer" containerID="039452c900e108303197c739e66dbc63e69c4f7575d5bb48a3d8999f696d09ce"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: I1002 10:39:39.440478    2303 scope.go:117] "RemoveContainer" containerID="039452c900e108303197c739e66dbc63e69c4f7575d5bb48a3d8999f696d09ce"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: E1002 10:39:39.444585    2303 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 039452c900e108303197c739e66dbc63e69c4f7575d5bb48a3d8999f696d09ce" containerID="039452c900e108303197c739e66dbc63e69c4f7575d5bb48a3d8999f696d09ce"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: I1002 10:39:39.444633    2303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"039452c900e108303197c739e66dbc63e69c4f7575d5bb48a3d8999f696d09ce"} err="failed to get container status \"039452c900e108303197c739e66dbc63e69c4f7575d5bb48a3d8999f696d09ce\": rpc error: code = Unknown desc = Error response from daemon: No such container: 039452c900e108303197c739e66dbc63e69c4f7575d5bb48a3d8999f696d09ce"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: I1002 10:39:39.444647    2303 scope.go:117] "RemoveContainer" containerID="4360aa207761b3ae8013dff32f813e64223411baf5f3f6e6082cced6c397ee74"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: I1002 10:39:39.466168    2303 scope.go:117] "RemoveContainer" containerID="29c87e16638fa779162e61aa77f6fdc275a39ef48a1890fb4c74026413797bf3"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: I1002 10:39:39.481791    2303 scope.go:117] "RemoveContainer" containerID="29c87e16638fa779162e61aa77f6fdc275a39ef48a1890fb4c74026413797bf3"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: E1002 10:39:39.482715    2303 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 29c87e16638fa779162e61aa77f6fdc275a39ef48a1890fb4c74026413797bf3" containerID="29c87e16638fa779162e61aa77f6fdc275a39ef48a1890fb4c74026413797bf3"
	Oct 02 10:39:39 addons-358443 kubelet[2303]: I1002 10:39:39.482760    2303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"29c87e16638fa779162e61aa77f6fdc275a39ef48a1890fb4c74026413797bf3"} err="failed to get container status \"29c87e16638fa779162e61aa77f6fdc275a39ef48a1890fb4c74026413797bf3\": rpc error: code = Unknown desc = Error response from daemon: No such container: 29c87e16638fa779162e61aa77f6fdc275a39ef48a1890fb4c74026413797bf3"
	Oct 02 10:39:41 addons-358443 kubelet[2303]: I1002 10:39:41.296602    2303 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="05d0889f-58b5-45af-903b-cacc1f933a3c" path="/var/lib/kubelet/pods/05d0889f-58b5-45af-903b-cacc1f933a3c/volumes"
	Oct 02 10:39:41 addons-358443 kubelet[2303]: I1002 10:39:41.296988    2303 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7f4d7993-243e-4511-9d43-312ccf205df2" path="/var/lib/kubelet/pods/7f4d7993-243e-4511-9d43-312ccf205df2/volumes"
	
	* 
	* ==> storage-provisioner [e3362d25e3d1] <==
	* I1002 10:37:22.354179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 10:37:22.430105       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 10:37:22.436743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 10:37:22.444854       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 10:37:22.446237       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-358443_9bd4b309-b1c0-4670-b7ec-7d15e900102d!
	I1002 10:37:22.447648       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c83b18bc-095e-493b-a1d3-afcb935bbcd2", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-358443_9bd4b309-b1c0-4670-b7ec-7d15e900102d became leader
	I1002 10:37:22.546844       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-358443_9bd4b309-b1c0-4670-b7ec-7d15e900102d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-358443 -n addons-358443
helpers_test.go:261: (dbg) Run:  kubectl --context addons-358443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (51.95s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-566627 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-566627 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.823003664s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-566627 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-566627 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7d301532-43cf-444d-8cee-3e3f8881cfa8] Pending
helpers_test.go:344: "nginx" [7d301532-43cf-444d-8cee-3e3f8881cfa8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7d301532-43cf-444d-8cee-3e3f8881cfa8] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.036014583s
addons_test.go:240: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-566627 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-566627 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-566627 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:275: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.017112399s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:277: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:281: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:284: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-566627 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-566627 addons disable ingress-dns --alsologtostderr -v=1: (3.297998501s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-566627 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-566627 addons disable ingress --alsologtostderr -v=1: (7.497148071s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-566627
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-566627:

-- stdout --
	[
	    {
	        "Id": "9880d5c044141c07968b32db7adef9804b56dfb0a2e8e702763e3d71d9402761",
	        "Created": "2023-10-02T10:46:13.563906618Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2187005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T10:46:13.903077652Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/9880d5c044141c07968b32db7adef9804b56dfb0a2e8e702763e3d71d9402761/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9880d5c044141c07968b32db7adef9804b56dfb0a2e8e702763e3d71d9402761/hostname",
	        "HostsPath": "/var/lib/docker/containers/9880d5c044141c07968b32db7adef9804b56dfb0a2e8e702763e3d71d9402761/hosts",
	        "LogPath": "/var/lib/docker/containers/9880d5c044141c07968b32db7adef9804b56dfb0a2e8e702763e3d71d9402761/9880d5c044141c07968b32db7adef9804b56dfb0a2e8e702763e3d71d9402761-json.log",
	        "Name": "/ingress-addon-legacy-566627",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-566627:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-566627",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/78a95cc63f2dd2cb2f0d1f9fd758511edf3462738d92b06841a628d34b9645d8-init/diff:/var/lib/docker/overlay2/1d88af17a205d2819b1e281e265595a32e0f15f4f368d2227a6ad399b77d9a22/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78a95cc63f2dd2cb2f0d1f9fd758511edf3462738d92b06841a628d34b9645d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78a95cc63f2dd2cb2f0d1f9fd758511edf3462738d92b06841a628d34b9645d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78a95cc63f2dd2cb2f0d1f9fd758511edf3462738d92b06841a628d34b9645d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-566627",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-566627/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-566627",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-566627",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-566627",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b46999314fd47a3f5c05c9866120fa8e929bc6dad652e30c7602cc530837eb71",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35510"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35506"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35508"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35507"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b46999314fd4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-566627": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9880d5c04414",
	                        "ingress-addon-legacy-566627"
	                    ],
	                    "NetworkID": "0bc0ef77360a8c40b4446876dfe810628fed9a709db6bb0c012ef66395a3c824",
	                    "EndpointID": "09cff1777b7662414e730817bb479746eebf3cb28c48b15e96238e80fc28504c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-566627 -n ingress-addon-legacy-566627
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-566627 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-566627 logs -n 25: (1.000442973s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-499029                     | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC |                     |
	|                | --kill=true                              |                             |         |         |                     |                     |
	| update-context | functional-499029                        | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-499029                        | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-499029                        | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-499029                        | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-499029                        | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-499029 ssh pgrep              | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-499029 image build -t         | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | localhost/my-image:functional-499029     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-499029                        | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-499029                        | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-499029 image ls               | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	| delete         | -p functional-499029                     | functional-499029           | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	| start          | -p image-182912                          | image-182912                | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | --driver=docker                          |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-182912                | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-182912                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-182912                | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-182912                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-182912                | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-182912                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-182912                | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-182912                          |                             |         |         |                     |                     |
	| delete         | -p image-182912                          | image-182912                | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:45 UTC |
	| start          | -p ingress-addon-legacy-566627           | ingress-addon-legacy-566627 | jenkins | v1.31.2 | 02 Oct 23 10:45 UTC | 02 Oct 23 10:47 UTC |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                     |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-566627              | ingress-addon-legacy-566627 | jenkins | v1.31.2 | 02 Oct 23 10:47 UTC | 02 Oct 23 10:47 UTC |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-566627              | ingress-addon-legacy-566627 | jenkins | v1.31.2 | 02 Oct 23 10:47 UTC | 02 Oct 23 10:47 UTC |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-566627              | ingress-addon-legacy-566627 | jenkins | v1.31.2 | 02 Oct 23 10:47 UTC | 02 Oct 23 10:47 UTC |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-566627 ip           | ingress-addon-legacy-566627 | jenkins | v1.31.2 | 02 Oct 23 10:47 UTC | 02 Oct 23 10:47 UTC |
	| addons         | ingress-addon-legacy-566627              | ingress-addon-legacy-566627 | jenkins | v1.31.2 | 02 Oct 23 10:48 UTC | 02 Oct 23 10:48 UTC |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-566627              | ingress-addon-legacy-566627 | jenkins | v1.31.2 | 02 Oct 23 10:48 UTC | 02 Oct 23 10:48 UTC |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:45:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:45:57.115689 2186539 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:45:57.115831 2186539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:45:57.115848 2186539 out.go:309] Setting ErrFile to fd 2...
	I1002 10:45:57.115854 2186539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:45:57.116148 2186539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	I1002 10:45:57.116628 2186539 out.go:303] Setting JSON to false
	I1002 10:45:57.117760 2186539 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":66504,"bootTime":1696177053,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 10:45:57.117838 2186539 start.go:138] virtualization:  
	I1002 10:45:57.120971 2186539 out.go:177] * [ingress-addon-legacy-566627] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 10:45:57.123138 2186539 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:45:57.124997 2186539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:45:57.123307 2186539 notify.go:220] Checking for updates...
	I1002 10:45:57.128854 2186539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:45:57.130753 2186539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	I1002 10:45:57.132409 2186539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 10:45:57.134432 2186539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:45:57.136668 2186539 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:45:57.165105 2186539 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 10:45:57.165263 2186539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:45:57.260283 2186539 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 10:45:57.250724001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:45:57.260424 2186539 docker.go:294] overlay module found
	I1002 10:45:57.263762 2186539 out.go:177] * Using the docker driver based on user configuration
	I1002 10:45:57.265599 2186539 start.go:298] selected driver: docker
	I1002 10:45:57.265616 2186539 start.go:902] validating driver "docker" against <nil>
	I1002 10:45:57.265634 2186539 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:45:57.266222 2186539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:45:57.332079 2186539 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 10:45:57.321757225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:45:57.332254 2186539 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 10:45:57.332520 2186539 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 10:45:57.334502 2186539 out.go:177] * Using Docker driver with root privileges
	I1002 10:45:57.336237 2186539 cni.go:84] Creating CNI manager for ""
	I1002 10:45:57.336260 2186539 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 10:45:57.336272 2186539 start_flags.go:321] config:
	{Name:ingress-addon-legacy-566627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-566627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:45:57.338188 2186539 out.go:177] * Starting control plane node ingress-addon-legacy-566627 in cluster ingress-addon-legacy-566627
	I1002 10:45:57.339825 2186539 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:45:57.341788 2186539 out.go:177] * Pulling base image ...
	I1002 10:45:57.343489 2186539 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 10:45:57.343565 2186539 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:45:57.360030 2186539 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 10:45:57.360051 2186539 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 10:45:57.417263 2186539 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1002 10:45:57.417286 2186539 cache.go:57] Caching tarball of preloaded images
	I1002 10:45:57.417436 2186539 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 10:45:57.419611 2186539 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1002 10:45:57.421326 2186539 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1002 10:45:57.534821 2186539 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1002 10:46:06.315034 2186539 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1002 10:46:06.315138 2186539 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1002 10:46:07.419485 2186539 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
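The download/verify steps above fetch the preload tarball with a `?checksum=md5:...` query, then re-hash the local file and compare before trusting the cache. A minimal, runnable sketch of that verification logic (the temp file and its contents are illustrative stand-ins for the real tarball; in the real flow the expected digest comes from the URL query string):

```shell
# Stand-in for the downloaded preload tarball.
tmp=$(mktemp)
printf 'preload-bytes' > "$tmp"

# Digest recorded at download time (here: computed from the same file).
expected=$(md5sum "$tmp" | awk '{print $1}')

# Digest recomputed during cache verification.
actual=$(md5sum "$tmp" | awk '{print $1}')

if [ "$expected" = "$actual" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch" >&2
  exit 1
fi
```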
	I1002 10:46:07.419864 2186539 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/config.json ...
	I1002 10:46:07.419902 2186539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/config.json: {Name:mk82f9a3987487a7ee8968191e26ec27dec23fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:46:07.420087 2186539 cache.go:195] Successfully downloaded all kic artifacts
	I1002 10:46:07.420146 2186539 start.go:365] acquiring machines lock for ingress-addon-legacy-566627: {Name:mk9939afd9257f8321ee791b23ed88d83d530bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:46:07.420206 2186539 start.go:369] acquired machines lock for "ingress-addon-legacy-566627" in 43.831µs
	I1002 10:46:07.420230 2186539 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-566627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-566627 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 10:46:07.420297 2186539 start.go:125] createHost starting for "" (driver="docker")
	I1002 10:46:07.422438 2186539 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 10:46:07.422698 2186539 start.go:159] libmachine.API.Create for "ingress-addon-legacy-566627" (driver="docker")
	I1002 10:46:07.422738 2186539 client.go:168] LocalClient.Create starting
	I1002 10:46:07.422810 2186539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem
	I1002 10:46:07.422852 2186539 main.go:141] libmachine: Decoding PEM data...
	I1002 10:46:07.422871 2186539 main.go:141] libmachine: Parsing certificate...
	I1002 10:46:07.422927 2186539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem
	I1002 10:46:07.422950 2186539 main.go:141] libmachine: Decoding PEM data...
	I1002 10:46:07.422965 2186539 main.go:141] libmachine: Parsing certificate...
	I1002 10:46:07.423306 2186539 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-566627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 10:46:07.440211 2186539 cli_runner.go:211] docker network inspect ingress-addon-legacy-566627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 10:46:07.440298 2186539 network_create.go:281] running [docker network inspect ingress-addon-legacy-566627] to gather additional debugging logs...
	I1002 10:46:07.440318 2186539 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-566627
	W1002 10:46:07.457364 2186539 cli_runner.go:211] docker network inspect ingress-addon-legacy-566627 returned with exit code 1
	I1002 10:46:07.457398 2186539 network_create.go:284] error running [docker network inspect ingress-addon-legacy-566627]: docker network inspect ingress-addon-legacy-566627: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-566627 not found
	I1002 10:46:07.457412 2186539 network_create.go:286] output of [docker network inspect ingress-addon-legacy-566627]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-566627 not found
	
	** /stderr **
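The exit-code 1 from `docker network inspect` above is the expected "network not found" signal: minikube probes for the per-cluster network and, on failure, proceeds to create it. A runnable sketch of that probe-then-create decision, with a stub standing in for `docker network inspect` so it runs without a Docker daemon (the stub and cluster name are illustrative):

```shell
# Stub for "docker network inspect <name>": exit 0 only if the network exists.
# Here we pretend only the default "bridge" network is present.
network_exists() {
  [ "$1" = "bridge" ]
}

cluster_net=ingress-addon-legacy-566627
if network_exists "$cluster_net"; then
  echo "network present, reusing"
else
  echo "network missing, creating"
fi
```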
	I1002 10:46:07.457480 2186539 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:46:07.476268 2186539 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000e4d500}
	I1002 10:46:07.476306 2186539 network_create.go:123] attempt to create docker network ingress-addon-legacy-566627 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 10:46:07.476364 2186539 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-566627 ingress-addon-legacy-566627
	I1002 10:46:07.547833 2186539 network_create.go:107] docker network ingress-addon-legacy-566627 192.168.49.0/24 created
	I1002 10:46:07.547865 2186539 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-566627" container
	I1002 10:46:07.547947 2186539 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 10:46:07.574997 2186539 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-566627 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-566627 --label created_by.minikube.sigs.k8s.io=true
	I1002 10:46:07.595536 2186539 oci.go:103] Successfully created a docker volume ingress-addon-legacy-566627
	I1002 10:46:07.595619 2186539 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-566627-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-566627 --entrypoint /usr/bin/test -v ingress-addon-legacy-566627:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 10:46:08.913595 2186539 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-566627-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-566627 --entrypoint /usr/bin/test -v ingress-addon-legacy-566627:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (1.317926136s)
	I1002 10:46:08.913625 2186539 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-566627
	I1002 10:46:08.913643 2186539 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 10:46:08.913664 2186539 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 10:46:08.913754 2186539 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-566627:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 10:46:13.481087 2186539 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-566627:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.567284996s)
	I1002 10:46:13.481120 2186539 kic.go:199] duration metric: took 4.567454 seconds to extract preloaded images to volume
	W1002 10:46:13.481287 2186539 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 10:46:13.481397 2186539 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 10:46:13.547730 2186539 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-566627 --name ingress-addon-legacy-566627 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-566627 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-566627 --network ingress-addon-legacy-566627 --ip 192.168.49.2 --volume ingress-addon-legacy-566627:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 10:46:13.911095 2186539 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-566627 --format={{.State.Running}}
	I1002 10:46:13.939760 2186539 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-566627 --format={{.State.Status}}
	I1002 10:46:13.969427 2186539 cli_runner.go:164] Run: docker exec ingress-addon-legacy-566627 stat /var/lib/dpkg/alternatives/iptables
	I1002 10:46:14.045321 2186539 oci.go:144] the created container "ingress-addon-legacy-566627" has a running status.
	I1002 10:46:14.045347 2186539 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa...
	I1002 10:46:14.358600 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 10:46:14.358710 2186539 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 10:46:14.391838 2186539 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-566627 --format={{.State.Status}}
	I1002 10:46:14.426312 2186539 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 10:46:14.426331 2186539 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-566627 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 10:46:14.496223 2186539 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-566627 --format={{.State.Status}}
	I1002 10:46:14.516709 2186539 machine.go:88] provisioning docker machine ...
	I1002 10:46:14.516743 2186539 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-566627"
	I1002 10:46:14.516814 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:14.535391 2186539 main.go:141] libmachine: Using SSH client type: native
	I1002 10:46:14.535837 2186539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35510 <nil> <nil>}
	I1002 10:46:14.535860 2186539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-566627 && echo "ingress-addon-legacy-566627" | sudo tee /etc/hostname
	I1002 10:46:14.536559 2186539 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 10:46:17.687632 2186539 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-566627
	
	I1002 10:46:17.687718 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:17.706703 2186539 main.go:141] libmachine: Using SSH client type: native
	I1002 10:46:17.707129 2186539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35510 <nil> <nil>}
	I1002 10:46:17.707158 2186539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-566627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-566627/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-566627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:46:17.842515 2186539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
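The SSH command above patches `/etc/hosts` idempotently: if the hostname is already mapped it does nothing; otherwise it rewrites an existing `127.0.1.1` entry or appends one. The same logic, pointed at a temp file instead of the real `/etc/hosts` so it is safe to run anywhere (file contents are illustrative):

```shell
# Fake /etc/hosts with a stale 127.0.1.1 mapping.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"

name=ingress-addon-legacy-566627
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # Rewrite the existing 127.0.1.1 line in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
grep '^127.0.1.1' "$hosts"
```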
	I1002 10:46:17.842541 2186539 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2134307/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2134307/.minikube}
	I1002 10:46:17.842560 2186539 ubuntu.go:177] setting up certificates
	I1002 10:46:17.842568 2186539 provision.go:83] configureAuth start
	I1002 10:46:17.842636 2186539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-566627
	I1002 10:46:17.861372 2186539 provision.go:138] copyHostCerts
	I1002 10:46:17.861412 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:46:17.861447 2186539 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem, removing ...
	I1002 10:46:17.861459 2186539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:46:17.861551 2186539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem (1082 bytes)
	I1002 10:46:17.861637 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:46:17.861659 2186539 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem, removing ...
	I1002 10:46:17.861663 2186539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:46:17.861691 2186539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem (1123 bytes)
	I1002 10:46:17.861739 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:46:17.861759 2186539 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem, removing ...
	I1002 10:46:17.861763 2186539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:46:17.861788 2186539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem (1679 bytes)
	I1002 10:46:17.861842 2186539 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-566627 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-566627]
	I1002 10:46:18.251785 2186539 provision.go:172] copyRemoteCerts
	I1002 10:46:18.251852 2186539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:46:18.251893 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:18.278353 2186539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35510 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa Username:docker}
	I1002 10:46:18.379875 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 10:46:18.379951 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:46:18.409266 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 10:46:18.409328 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1002 10:46:18.437204 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 10:46:18.437297 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 10:46:18.465057 2186539 provision.go:86] duration metric: configureAuth took 622.453195ms
	I1002 10:46:18.465122 2186539 ubuntu.go:193] setting minikube options for container-runtime
	I1002 10:46:18.465370 2186539 config.go:182] Loaded profile config "ingress-addon-legacy-566627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 10:46:18.465433 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:18.482592 2186539 main.go:141] libmachine: Using SSH client type: native
	I1002 10:46:18.483002 2186539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35510 <nil> <nil>}
	I1002 10:46:18.483019 2186539 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 10:46:18.623465 2186539 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 10:46:18.623501 2186539 ubuntu.go:71] root file system type: overlay
	I1002 10:46:18.623621 2186539 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 10:46:18.623690 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:18.645598 2186539 main.go:141] libmachine: Using SSH client type: native
	I1002 10:46:18.646034 2186539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35510 <nil> <nil>}
	I1002 10:46:18.646115 2186539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 10:46:18.800120 2186539 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 10:46:18.800201 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:18.820639 2186539 main.go:141] libmachine: Using SSH client type: native
	I1002 10:46:18.821057 2186539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35510 <nil> <nil>}
	I1002 10:46:18.821075 2186539 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 10:46:19.642521 2186539 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:29:57.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-02 10:46:18.793778560 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
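The `diff -u … || { mv …; systemctl … }` invocation above is an idempotence idiom: the freshly rendered unit only replaces the installed one (and only then is a reload triggered) when the two files actually differ. A minimal sketch of the same pattern against scratch files (paths and unit contents here are illustrative, not minikube's):

```shell
# Scratch files stand in for /lib/systemd/system/docker.service{,.new}.
cfg=$(mktemp); new=$(mktemp)
printf 'Restart=always\n'     > "$cfg"
printf 'Restart=on-failure\n' > "$new"
# diff exits non-zero when the files differ, so the replace-and-reload
# branch runs only on a real change; a no-op rerun leaves cfg untouched.
diff -u "$cfg" "$new" >/dev/null || { mv "$new" "$cfg"; echo "unit replaced"; }
cat "$cfg"
```

Running it a second time with identical files would take the diff's success path and skip the replacement, which is why minikube can re-provision machines without restarting Docker needlessly.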
	
	I1002 10:46:19.642554 2186539 machine.go:91] provisioned docker machine in 5.125825758s
	I1002 10:46:19.642565 2186539 client.go:171] LocalClient.Create took 12.219812604s
	I1002 10:46:19.642578 2186539 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-566627" took 12.219878999s
	I1002 10:46:19.642588 2186539 start.go:300] post-start starting for "ingress-addon-legacy-566627" (driver="docker")
	I1002 10:46:19.642600 2186539 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:46:19.642669 2186539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:46:19.642713 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:19.662183 2186539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35510 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa Username:docker}
	I1002 10:46:19.760352 2186539 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:46:19.764378 2186539 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 10:46:19.764413 2186539 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 10:46:19.764425 2186539 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 10:46:19.764432 2186539 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 10:46:19.764443 2186539 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/addons for local assets ...
	I1002 10:46:19.764509 2186539 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/files for local assets ...
	I1002 10:46:19.764595 2186539 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> 21397002.pem in /etc/ssl/certs
	I1002 10:46:19.764607 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /etc/ssl/certs/21397002.pem
	I1002 10:46:19.764705 2186539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 10:46:19.774974 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:46:19.803753 2186539 start.go:303] post-start completed in 161.1458ms
	I1002 10:46:19.804123 2186539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-566627
	I1002 10:46:19.823581 2186539 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/config.json ...
	I1002 10:46:19.823852 2186539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:46:19.823903 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:19.841883 2186539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35510 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa Username:docker}
	I1002 10:46:19.939306 2186539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 10:46:19.944930 2186539 start.go:128] duration metric: createHost completed in 12.524612845s
	I1002 10:46:19.944953 2186539 start.go:83] releasing machines lock for "ingress-addon-legacy-566627", held for 12.524733853s
	I1002 10:46:19.945041 2186539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-566627
	I1002 10:46:19.962838 2186539 ssh_runner.go:195] Run: cat /version.json
	I1002 10:46:19.962888 2186539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:46:19.962958 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:19.962890 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:46:19.983366 2186539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35510 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa Username:docker}
	I1002 10:46:19.989447 2186539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35510 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa Username:docker}
	I1002 10:46:20.217186 2186539 ssh_runner.go:195] Run: systemctl --version
	I1002 10:46:20.222792 2186539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 10:46:20.228518 2186539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 10:46:20.258586 2186539 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 10:46:20.258663 2186539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1002 10:46:20.279710 2186539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1002 10:46:20.300147 2186539 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
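The bridge/podman CNI rewrites above force every existing bridge config onto the pod CIDR that kubeadm will later be told about (10.244.0.0/16). An illustrative version of that `sed` rewrite against a throwaway config file (the JSON content and path are made up; the real files live in /etc/cni/net.d):

```shell
# Scratch CNI config with a subnet minikube would want to override.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{ "name": "bridge", "ipam": { "subnet": "192.168.5.0/24" } }
EOF
# Rewrite whatever subnet is present to the cluster's pod CIDR.
sed -i -r 's|"subnet": "[^"]*"|"subnet": "10.244.0.0/16"|g' "$conf"
cat "$conf"
```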
	I1002 10:46:20.300171 2186539 start.go:469] detecting cgroup driver to use...
	I1002 10:46:20.300205 2186539 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:46:20.300315 2186539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:46:20.320617 2186539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1002 10:46:20.332932 2186539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 10:46:20.345370 2186539 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 10:46:20.345464 2186539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 10:46:20.357467 2186539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:46:20.369747 2186539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 10:46:20.381462 2186539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:46:20.393666 2186539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:46:20.405331 2186539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 10:46:20.417408 2186539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:46:20.427809 2186539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:46:20.438169 2186539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:46:20.529914 2186539 ssh_runner.go:195] Run: sudo systemctl restart containerd
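The containerd edits above all follow the same shape: a `sed` with a capture group so the key's value is flipped while its TOML indentation is preserved. A minimal sketch of the `SystemdCgroup` flip on a scratch copy of the config (the real file is /etc/containerd/config.toml):

```shell
# Scratch stand-in for containerd's config.toml.
toml=$(mktemp)
printf '    SystemdCgroup = true\n' > "$toml"
# \1 re-emits the captured leading spaces, so only the value changes.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
cat "$toml"
```

Keeping the indentation intact matters because the key sits inside a nested TOML table in the real file.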
	I1002 10:46:20.649024 2186539 start.go:469] detecting cgroup driver to use...
	I1002 10:46:20.649070 2186539 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:46:20.649126 2186539 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 10:46:20.668739 2186539 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 10:46:20.668814 2186539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 10:46:20.684115 2186539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:46:20.705434 2186539 ssh_runner.go:195] Run: which cri-dockerd
	I1002 10:46:20.710958 2186539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 10:46:20.722878 2186539 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 10:46:20.754686 2186539 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 10:46:20.866533 2186539 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 10:46:20.967771 2186539 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 10:46:20.967882 2186539 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 10:46:20.994443 2186539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:46:21.096319 2186539 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:46:21.376764 2186539 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:46:21.403519 2186539 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:46:21.434494 2186539 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I1002 10:46:21.434595 2186539 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-566627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:46:21.452548 2186539 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 10:46:21.457008 2186539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
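The /etc/hosts update above is a strip-then-append pattern: any stale `host.minikube.internal` line is filtered out before the current entry is written, so repeated provisioning runs never accumulate duplicates. The same idiom against a scratch hosts file (addresses here are illustrative, not the real /etc/hosts):

```shell
# Scratch hosts file with a stale minikube entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop the old entry (anchored on the tab + trailing name), append the new one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```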
	I1002 10:46:21.470323 2186539 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 10:46:21.470395 2186539 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 10:46:21.491060 2186539 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1002 10:46:21.491082 2186539 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1002 10:46:21.491141 2186539 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 10:46:21.502416 2186539 ssh_runner.go:195] Run: which lz4
	I1002 10:46:21.507720 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1002 10:46:21.507831 2186539 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 10:46:21.512526 2186539 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 10:46:21.512562 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1002 10:46:23.505605 2186539 docker.go:628] Took 1.997824 seconds to copy over tarball
	I1002 10:46:23.505709 2186539 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 10:46:26.085340 2186539 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.579558563s)
	I1002 10:46:26.085373 2186539 ssh_runner.go:146] rm: /preloaded.tar.lz4
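The `tar -I lz4` step above uses GNU tar's `-I`/`--use-compress-program` flag, which hands compression and decompression to an arbitrary external codec. A sketch of the same round trip using gzip instead of lz4 (so it runs on machines without lz4 installed; the file names are made up):

```shell
# Create a tiny "preload" archive and unpack it elsewhere via -I.
src=$(mktemp -d); dst=$(mktemp -d)
echo preloaded-layer > "$src/layer.txt"
tar -I gzip -C "$src" -cf "$dst/preloaded.tar.gz" layer.txt
tar -I gzip -C "$dst" -xf "$dst/preloaded.tar.gz"
cat "$dst/layer.txt"
```

minikube picks lz4 here because decompression speed, not ratio, dominates when unpacking a ~460 MB image preload onto the node.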
	I1002 10:46:26.240389 2186539 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 10:46:26.251177 2186539 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1002 10:46:26.272929 2186539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:46:26.370430 2186539 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:46:28.721076 2186539 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.350603389s)
	I1002 10:46:28.721165 2186539 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 10:46:28.742344 2186539 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1002 10:46:28.742366 2186539 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1002 10:46:28.742374 2186539 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 10:46:28.743766 2186539 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:46:28.743951 2186539 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:46:28.744100 2186539 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1002 10:46:28.744159 2186539 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:46:28.744334 2186539 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:46:28.744396 2186539 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1002 10:46:28.744448 2186539 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1002 10:46:28.744610 2186539 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:46:28.744901 2186539 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 10:46:28.745731 2186539 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:46:28.745869 2186539 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:46:28.746165 2186539 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:46:28.746402 2186539 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:46:28.747206 2186539 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:46:28.747307 2186539 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1002 10:46:28.747552 2186539 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W1002 10:46:29.090726 2186539 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 10:46:29.091000 2186539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:46:29.111503 2186539 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1002 10:46:29.111600 2186539 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:46:29.111685 2186539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:46:29.133969 2186539 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W1002 10:46:29.161277 2186539 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1002 10:46:29.161582 2186539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1002 10:46:29.178451 2186539 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 10:46:29.178681 2186539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:46:29.180345 2186539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1002 10:46:29.183410 2186539 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1002 10:46:29.183577 2186539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1002 10:46:29.184353 2186539 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1002 10:46:29.184426 2186539 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1002 10:46:29.184473 2186539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	W1002 10:46:29.194435 2186539 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 10:46:29.194607 2186539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1002 10:46:29.224333 2186539 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 10:46:29.224545 2186539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:46:29.249577 2186539 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1002 10:46:29.249660 2186539 docker.go:317] Removing image: registry.k8s.io/pause:3.2
	I1002 10:46:29.249729 2186539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1002 10:46:29.249858 2186539 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1002 10:46:29.249897 2186539 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:46:29.249937 2186539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:46:29.258465 2186539 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1002 10:46:29.258615 2186539 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.7
	I1002 10:46:29.258724 2186539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1002 10:46:29.258846 2186539 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1002 10:46:29.258933 2186539 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1002 10:46:29.258992 2186539 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:46:29.259032 2186539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:46:29.276871 2186539 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1002 10:46:29.276959 2186539 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:46:29.277039 2186539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:46:29.320169 2186539 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1002 10:46:29.320226 2186539 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1002 10:46:29.320267 2186539 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1002 10:46:29.320311 2186539 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1002 10:46:29.329887 2186539 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W1002 10:46:29.454719 2186539 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 10:46:29.454893 2186539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:46:29.476019 2186539 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 10:46:29.476067 2186539 docker.go:317] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:46:29.476129 2186539 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:46:29.509660 2186539 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 10:46:29.509784 2186539 cache_images.go:92] LoadImages completed in 767.397557ms
	W1002 10:46:29.509895 2186539 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I1002 10:46:29.509990 2186539 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 10:46:29.572067 2186539 cni.go:84] Creating CNI manager for ""
	I1002 10:46:29.572089 2186539 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 10:46:29.572135 2186539 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:46:29.572162 2186539 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-566627 NodeName:ingress-addon-legacy-566627 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 10:46:29.572345 2186539 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-566627"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:46:29.572423 2186539 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-566627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-566627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:46:29.572502 2186539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1002 10:46:29.583404 2186539 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:46:29.583545 2186539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 10:46:29.594189 2186539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1002 10:46:29.617634 2186539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1002 10:46:29.640282 2186539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1002 10:46:29.662834 2186539 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 10:46:29.667335 2186539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:46:29.681628 2186539 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627 for IP: 192.168.49.2
	I1002 10:46:29.681700 2186539 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1d43a94e604cdd7d897bd7b1078cd14b38f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:46:29.681860 2186539 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key
	I1002 10:46:29.681921 2186539 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key
	I1002 10:46:29.681976 2186539 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.key
	I1002 10:46:29.681990 2186539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt with IP's: []
	I1002 10:46:30.170933 2186539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt ...
	I1002 10:46:30.170965 2186539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: {Name:mk62189a2b54e16b7f62df631b25c02be82e6ad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:46:30.171180 2186539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.key ...
	I1002 10:46:30.171194 2186539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.key: {Name:mk78bb2c87df8991227737bdf0a8e6ca5f8c52e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:46:30.171287 2186539 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.key.dd3b5fb2
	I1002 10:46:30.171309 2186539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 10:46:30.420418 2186539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.crt.dd3b5fb2 ...
	I1002 10:46:30.420448 2186539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.crt.dd3b5fb2: {Name:mk53d36de5676421e5f9fdc656e176ea09cc63b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:46:30.420629 2186539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.key.dd3b5fb2 ...
	I1002 10:46:30.420642 2186539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.key.dd3b5fb2: {Name:mk47d8d027bf1ce7ec76e02bfc83562f8c22110b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:46:30.420728 2186539 certs.go:337] copying /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.crt
	I1002 10:46:30.420805 2186539 certs.go:341] copying /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.key
	I1002 10:46:30.420866 2186539 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.key
	I1002 10:46:30.420883 2186539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.crt with IP's: []
	I1002 10:46:31.167475 2186539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.crt ...
	I1002 10:46:31.167505 2186539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.crt: {Name:mk78768088bd634d4c3108fd56549bcc9d4ad468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:46:31.167693 2186539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.key ...
	I1002 10:46:31.167706 2186539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.key: {Name:mk81cda293a5bf23dd63b3d350a3ed2fc6ce0422 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:46:31.167792 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 10:46:31.167814 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 10:46:31.167826 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 10:46:31.167848 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 10:46:31.167859 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 10:46:31.167875 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 10:46:31.167891 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 10:46:31.167907 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 10:46:31.167961 2186539 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem (1338 bytes)
	W1002 10:46:31.168001 2186539 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700_empty.pem, impossibly tiny 0 bytes
	I1002 10:46:31.168016 2186539 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 10:46:31.168044 2186539 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:46:31.168072 2186539 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:46:31.168102 2186539 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem (1679 bytes)
	I1002 10:46:31.168155 2186539 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:46:31.168185 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:46:31.168205 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem -> /usr/share/ca-certificates/2139700.pem
	I1002 10:46:31.168219 2186539 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /usr/share/ca-certificates/21397002.pem
	I1002 10:46:31.168847 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 10:46:31.200349 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 10:46:31.229713 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 10:46:31.258160 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 10:46:31.286272 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:46:31.314875 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 10:46:31.342417 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:46:31.370702 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 10:46:31.399222 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:46:31.427585 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem --> /usr/share/ca-certificates/2139700.pem (1338 bytes)
	I1002 10:46:31.456453 2186539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /usr/share/ca-certificates/21397002.pem (1708 bytes)
	I1002 10:46:31.483856 2186539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 10:46:31.505130 2186539 ssh_runner.go:195] Run: openssl version
	I1002 10:46:31.512353 2186539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21397002.pem && ln -fs /usr/share/ca-certificates/21397002.pem /etc/ssl/certs/21397002.pem"
	I1002 10:46:31.524257 2186539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21397002.pem
	I1002 10:46:31.528777 2186539 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:46:31.528867 2186539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21397002.pem
	I1002 10:46:31.537139 2186539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21397002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 10:46:31.548402 2186539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:46:31.559654 2186539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:46:31.564316 2186539 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:46:31.564416 2186539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:46:31.572918 2186539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 10:46:31.584566 2186539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2139700.pem && ln -fs /usr/share/ca-certificates/2139700.pem /etc/ssl/certs/2139700.pem"
	I1002 10:46:31.595990 2186539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2139700.pem
	I1002 10:46:31.600637 2186539 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:46:31.600729 2186539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2139700.pem
	I1002 10:46:31.609198 2186539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2139700.pem /etc/ssl/certs/51391683.0"
	I1002 10:46:31.620865 2186539 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:46:31.625196 2186539 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:46:31.625247 2186539 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-566627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-566627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:46:31.625383 2186539 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 10:46:31.646142 2186539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 10:46:31.656644 2186539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 10:46:31.666795 2186539 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 10:46:31.666886 2186539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 10:46:31.677335 2186539 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 10:46:31.677387 2186539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 10:46:31.733135 2186539 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1002 10:46:31.733416 2186539 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 10:46:31.958418 2186539 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:46:31.958488 2186539 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:46:31.958544 2186539 kubeadm.go:322] DOCKER_VERSION: 24.0.6
	I1002 10:46:31.958584 2186539 kubeadm.go:322] OS: Linux
	I1002 10:46:31.958630 2186539 kubeadm.go:322] CGROUPS_CPU: enabled
	I1002 10:46:31.958678 2186539 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1002 10:46:31.958726 2186539 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1002 10:46:31.958775 2186539 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1002 10:46:31.958824 2186539 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1002 10:46:31.958872 2186539 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1002 10:46:32.051241 2186539 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 10:46:32.051350 2186539 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 10:46:32.051444 2186539 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 10:46:32.261129 2186539 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 10:46:32.262706 2186539 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 10:46:32.262954 2186539 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 10:46:32.371529 2186539 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 10:46:32.374169 2186539 out.go:204]   - Generating certificates and keys ...
	I1002 10:46:32.374333 2186539 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 10:46:32.374426 2186539 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 10:46:32.723304 2186539 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 10:46:33.056751 2186539 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 10:46:34.288319 2186539 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 10:46:34.748079 2186539 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 10:46:35.119460 2186539 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 10:46:35.120281 2186539 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-566627 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 10:46:35.276959 2186539 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 10:46:35.277437 2186539 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-566627 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 10:46:35.831500 2186539 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 10:46:36.124748 2186539 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 10:46:36.507903 2186539 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 10:46:36.508501 2186539 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 10:46:36.788762 2186539 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 10:46:36.933221 2186539 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 10:46:38.029824 2186539 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 10:46:38.497743 2186539 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 10:46:38.498596 2186539 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 10:46:38.500923 2186539 out.go:204]   - Booting up control plane ...
	I1002 10:46:38.501031 2186539 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 10:46:38.515151 2186539 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 10:46:38.515235 2186539 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 10:46:38.515311 2186539 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 10:46:38.515730 2186539 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 10:46:51.019211 2186539 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502731 seconds
	I1002 10:46:51.019366 2186539 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 10:46:51.035875 2186539 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 10:46:51.562244 2186539 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 10:46:51.562467 2186539 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-566627 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 10:46:52.071422 2186539 kubeadm.go:322] [bootstrap-token] Using token: bsy71c.z4gdatmjolmlzk8i
	I1002 10:46:52.073473 2186539 out.go:204]   - Configuring RBAC rules ...
	I1002 10:46:52.073600 2186539 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 10:46:52.079181 2186539 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 10:46:52.088582 2186539 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 10:46:52.092266 2186539 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 10:46:52.095351 2186539 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 10:46:52.098884 2186539 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 10:46:52.111679 2186539 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 10:46:52.519818 2186539 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 10:46:52.588781 2186539 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 10:46:52.589232 2186539 kubeadm.go:322] 
	I1002 10:46:52.589317 2186539 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 10:46:52.589327 2186539 kubeadm.go:322] 
	I1002 10:46:52.589400 2186539 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 10:46:52.589410 2186539 kubeadm.go:322] 
	I1002 10:46:52.589435 2186539 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 10:46:52.590102 2186539 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 10:46:52.590161 2186539 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 10:46:52.590174 2186539 kubeadm.go:322] 
	I1002 10:46:52.590224 2186539 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 10:46:52.590299 2186539 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 10:46:52.590381 2186539 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 10:46:52.590392 2186539 kubeadm.go:322] 
	I1002 10:46:52.590471 2186539 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 10:46:52.590552 2186539 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 10:46:52.590562 2186539 kubeadm.go:322] 
	I1002 10:46:52.590641 2186539 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token bsy71c.z4gdatmjolmlzk8i \
	I1002 10:46:52.590743 2186539 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d \
	I1002 10:46:52.590769 2186539 kubeadm.go:322]     --control-plane 
	I1002 10:46:52.590777 2186539 kubeadm.go:322] 
	I1002 10:46:52.590856 2186539 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 10:46:52.590873 2186539 kubeadm.go:322] 
	I1002 10:46:52.590953 2186539 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token bsy71c.z4gdatmjolmlzk8i \
	I1002 10:46:52.591054 2186539 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d 
	I1002 10:46:52.599541 2186539 kubeadm.go:322] W1002 10:46:31.732271    1656 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1002 10:46:52.599800 2186539 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1002 10:46:52.599967 2186539 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1002 10:46:52.600237 2186539 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:46:52.600348 2186539 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 10:46:52.600469 2186539 kubeadm.go:322] W1002 10:46:38.506241    1656 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 10:46:52.600596 2186539 kubeadm.go:322] W1002 10:46:38.508443    1656 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 10:46:52.600610 2186539 cni.go:84] Creating CNI manager for ""
	I1002 10:46:52.600630 2186539 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 10:46:52.600669 2186539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 10:46:52.600798 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:52.600879 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=ingress-addon-legacy-566627 minikube.k8s.io/updated_at=2023_10_02T10_46_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:53.163347 2186539 ops.go:34] apiserver oom_adj: -16
	I1002 10:46:53.163468 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:53.259425 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:53.868146 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:54.368436 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:54.869049 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:55.368328 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:55.868352 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:56.368737 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:56.868654 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:57.368917 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:57.869054 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:58.368871 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:58.868949 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:59.368291 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:46:59.868900 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:00.368942 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:00.868776 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:01.368197 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:01.868917 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:02.368529 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:02.868178 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:03.368908 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:03.868191 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:04.368650 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:04.868192 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:05.368627 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:05.869122 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:06.368104 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:06.868929 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:07.368790 2186539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:47:07.512616 2186539 kubeadm.go:1081] duration metric: took 14.91186209s to wait for elevateKubeSystemPrivileges.
	I1002 10:47:07.512642 2186539 kubeadm.go:406] StartCluster complete in 35.887400204s
	I1002 10:47:07.512659 2186539 settings.go:142] acquiring lock: {Name:mk7b49767935c15b5f90083e95558323a1cf0ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:47:07.512718 2186539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:47:07.513476 2186539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/kubeconfig: {Name:mk62f5c672074becc8cade8f73c1bedcd1d2907c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:47:07.513691 2186539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 10:47:07.513964 2186539 config.go:182] Loaded profile config "ingress-addon-legacy-566627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 10:47:07.514094 2186539 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 10:47:07.514166 2186539 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-566627"
	I1002 10:47:07.514182 2186539 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-566627"
	I1002 10:47:07.514235 2186539 host.go:66] Checking if "ingress-addon-legacy-566627" exists ...
	I1002 10:47:07.514248 2186539 kapi.go:59] client config for ingress-addon-legacy-566627: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:47:07.514711 2186539 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-566627 --format={{.State.Status}}
	I1002 10:47:07.515171 2186539 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-566627"
	I1002 10:47:07.515188 2186539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-566627"
	I1002 10:47:07.515456 2186539 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-566627 --format={{.State.Status}}
	I1002 10:47:07.515545 2186539 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 10:47:07.552883 2186539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:47:07.554775 2186539 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 10:47:07.554795 2186539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 10:47:07.554859 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:47:07.569295 2186539 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-566627" context rescaled to 1 replicas
	I1002 10:47:07.569339 2186539 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 10:47:07.571233 2186539 out.go:177] * Verifying Kubernetes components...
	I1002 10:47:07.573427 2186539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:47:07.585823 2186539 kapi.go:59] client config for ingress-addon-legacy-566627: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:47:07.586088 2186539 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-566627"
	I1002 10:47:07.586130 2186539 host.go:66] Checking if "ingress-addon-legacy-566627" exists ...
	I1002 10:47:07.586599 2186539 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-566627 --format={{.State.Status}}
	I1002 10:47:07.626081 2186539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35510 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa Username:docker}
	I1002 10:47:07.640597 2186539 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 10:47:07.640619 2186539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 10:47:07.640682 2186539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-566627
	I1002 10:47:07.671430 2186539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35510 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/ingress-addon-legacy-566627/id_rsa Username:docker}
	I1002 10:47:07.779432 2186539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 10:47:07.780225 2186539 kapi.go:59] client config for ingress-addon-legacy-566627: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:47:07.780662 2186539 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-566627" to be "Ready" ...
	I1002 10:47:07.784646 2186539 node_ready.go:49] node "ingress-addon-legacy-566627" has status "Ready":"True"
	I1002 10:47:07.784704 2186539 node_ready.go:38] duration metric: took 3.997765ms waiting for node "ingress-addon-legacy-566627" to be "Ready" ...
	I1002 10:47:07.784750 2186539 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:47:07.792872 2186539 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-hmhrp" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:07.858304 2186539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 10:47:07.879259 2186539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 10:47:08.514934 2186539 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 10:47:08.592321 2186539 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1002 10:47:08.595065 2186539 addons.go:502] enable addons completed in 1.080961917s: enabled=[default-storageclass storage-provisioner]
	I1002 10:47:09.803883 2186539 pod_ready.go:102] pod "coredns-66bff467f8-hmhrp" in "kube-system" namespace has status "Ready":"False"
	I1002 10:47:11.806382 2186539 pod_ready.go:102] pod "coredns-66bff467f8-hmhrp" in "kube-system" namespace has status "Ready":"False"
	I1002 10:47:13.813274 2186539 pod_ready.go:102] pod "coredns-66bff467f8-hmhrp" in "kube-system" namespace has status "Ready":"False"
	I1002 10:47:16.304857 2186539 pod_ready.go:102] pod "coredns-66bff467f8-hmhrp" in "kube-system" namespace has status "Ready":"False"
	I1002 10:47:18.804986 2186539 pod_ready.go:102] pod "coredns-66bff467f8-hmhrp" in "kube-system" namespace has status "Ready":"False"
	I1002 10:47:20.304527 2186539 pod_ready.go:92] pod "coredns-66bff467f8-hmhrp" in "kube-system" namespace has status "Ready":"True"
	I1002 10:47:20.304556 2186539 pod_ready.go:81] duration metric: took 12.5116156s waiting for pod "coredns-66bff467f8-hmhrp" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.304571 2186539 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-566627" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.309399 2186539 pod_ready.go:92] pod "etcd-ingress-addon-legacy-566627" in "kube-system" namespace has status "Ready":"True"
	I1002 10:47:20.309421 2186539 pod_ready.go:81] duration metric: took 4.842093ms waiting for pod "etcd-ingress-addon-legacy-566627" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.309432 2186539 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-566627" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.314044 2186539 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-566627" in "kube-system" namespace has status "Ready":"True"
	I1002 10:47:20.314075 2186539 pod_ready.go:81] duration metric: took 4.63036ms waiting for pod "kube-apiserver-ingress-addon-legacy-566627" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.314088 2186539 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-566627" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.318703 2186539 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-566627" in "kube-system" namespace has status "Ready":"True"
	I1002 10:47:20.318728 2186539 pod_ready.go:81] duration metric: took 4.631993ms waiting for pod "kube-controller-manager-ingress-addon-legacy-566627" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.318740 2186539 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rxfb9" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.323391 2186539 pod_ready.go:92] pod "kube-proxy-rxfb9" in "kube-system" namespace has status "Ready":"True"
	I1002 10:47:20.323416 2186539 pod_ready.go:81] duration metric: took 4.669261ms waiting for pod "kube-proxy-rxfb9" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.323426 2186539 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-566627" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.499870 2186539 request.go:629] Waited for 176.316134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-566627
	I1002 10:47:20.700144 2186539 request.go:629] Waited for 197.334916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-566627
	I1002 10:47:20.702774 2186539 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-566627" in "kube-system" namespace has status "Ready":"True"
	I1002 10:47:20.702798 2186539 pod_ready.go:81] duration metric: took 379.36431ms waiting for pod "kube-scheduler-ingress-addon-legacy-566627" in "kube-system" namespace to be "Ready" ...
	I1002 10:47:20.702809 2186539 pod_ready.go:38] duration metric: took 12.918031129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:47:20.702828 2186539 api_server.go:52] waiting for apiserver process to appear ...
	I1002 10:47:20.702895 2186539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:47:20.716792 2186539 api_server.go:72] duration metric: took 13.147419368s to wait for apiserver process to appear ...
	I1002 10:47:20.716813 2186539 api_server.go:88] waiting for apiserver healthz status ...
	I1002 10:47:20.716830 2186539 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 10:47:20.725838 2186539 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 10:47:20.726744 2186539 api_server.go:141] control plane version: v1.18.20
	I1002 10:47:20.726768 2186539 api_server.go:131] duration metric: took 9.947684ms to wait for apiserver health ...
	I1002 10:47:20.726778 2186539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 10:47:20.900134 2186539 request.go:629] Waited for 173.28913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:47:20.906210 2186539 system_pods.go:59] 7 kube-system pods found
	I1002 10:47:20.906248 2186539 system_pods.go:61] "coredns-66bff467f8-hmhrp" [706f8073-7440-4b46-8eec-9a4ece4fd4a0] Running
	I1002 10:47:20.906255 2186539 system_pods.go:61] "etcd-ingress-addon-legacy-566627" [d6307784-746f-4fba-8e60-fb6c025db080] Running
	I1002 10:47:20.906261 2186539 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-566627" [7a007cac-ce49-404d-a23c-9b64f34d827b] Running
	I1002 10:47:20.906289 2186539 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-566627" [c1ec1704-d5c0-41fb-a251-6515ac9b007b] Running
	I1002 10:47:20.906301 2186539 system_pods.go:61] "kube-proxy-rxfb9" [55ab3f06-1e6d-4011-a53e-51ae21c73292] Running
	I1002 10:47:20.906308 2186539 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-566627" [b7a47e40-65b4-4729-9619-d77afa964646] Running
	I1002 10:47:20.906313 2186539 system_pods.go:61] "storage-provisioner" [37c8d8e0-9659-44b5-9764-a36b6792a3a1] Running
	I1002 10:47:20.906325 2186539 system_pods.go:74] duration metric: took 179.539617ms to wait for pod list to return data ...
	I1002 10:47:20.906334 2186539 default_sa.go:34] waiting for default service account to be created ...
	I1002 10:47:21.099710 2186539 request.go:629] Waited for 193.265882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 10:47:21.102318 2186539 default_sa.go:45] found service account: "default"
	I1002 10:47:21.102354 2186539 default_sa.go:55] duration metric: took 196.006652ms for default service account to be created ...
	I1002 10:47:21.102367 2186539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 10:47:21.299828 2186539 request.go:629] Waited for 197.363938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:47:21.305862 2186539 system_pods.go:86] 7 kube-system pods found
	I1002 10:47:21.305895 2186539 system_pods.go:89] "coredns-66bff467f8-hmhrp" [706f8073-7440-4b46-8eec-9a4ece4fd4a0] Running
	I1002 10:47:21.305903 2186539 system_pods.go:89] "etcd-ingress-addon-legacy-566627" [d6307784-746f-4fba-8e60-fb6c025db080] Running
	I1002 10:47:21.305909 2186539 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-566627" [7a007cac-ce49-404d-a23c-9b64f34d827b] Running
	I1002 10:47:21.305929 2186539 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-566627" [c1ec1704-d5c0-41fb-a251-6515ac9b007b] Running
	I1002 10:47:21.305942 2186539 system_pods.go:89] "kube-proxy-rxfb9" [55ab3f06-1e6d-4011-a53e-51ae21c73292] Running
	I1002 10:47:21.305947 2186539 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-566627" [b7a47e40-65b4-4729-9619-d77afa964646] Running
	I1002 10:47:21.305955 2186539 system_pods.go:89] "storage-provisioner" [37c8d8e0-9659-44b5-9764-a36b6792a3a1] Running
	I1002 10:47:21.305963 2186539 system_pods.go:126] duration metric: took 203.590442ms to wait for k8s-apps to be running ...
	I1002 10:47:21.305974 2186539 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 10:47:21.306051 2186539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:47:21.322011 2186539 system_svc.go:56] duration metric: took 16.024551ms WaitForService to wait for kubelet.
	I1002 10:47:21.322034 2186539 kubeadm.go:581] duration metric: took 13.752670147s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 10:47:21.322072 2186539 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:47:21.500490 2186539 request.go:629] Waited for 178.319324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1002 10:47:21.503560 2186539 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:47:21.503595 2186539 node_conditions.go:123] node cpu capacity is 2
	I1002 10:47:21.503607 2186539 node_conditions.go:105] duration metric: took 181.522779ms to run NodePressure ...
	I1002 10:47:21.503640 2186539 start.go:228] waiting for startup goroutines ...
	I1002 10:47:21.503658 2186539 start.go:233] waiting for cluster config update ...
	I1002 10:47:21.503670 2186539 start.go:242] writing updated cluster config ...
	I1002 10:47:21.503985 2186539 ssh_runner.go:195] Run: rm -f paused
	I1002 10:47:21.562731 2186539 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1002 10:47:21.565146 2186539 out.go:177] 
	W1002 10:47:21.566797 2186539 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1002 10:47:21.568582 2186539 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1002 10:47:21.570383 2186539 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-566627" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Oct 02 10:46:28 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:46:28.695073123Z" level=info msg="Daemon has completed initialization"
	Oct 02 10:46:28 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:46:28.718658048Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 02 10:46:28 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:46:28.718786219Z" level=info msg="API listen on [::]:2376"
	Oct 02 10:46:28 ingress-addon-legacy-566627 systemd[1]: Started Docker Application Container Engine.
	Oct 02 10:47:23 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:23.089356351Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Oct 02 10:47:24 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:24.658593052Z" level=info msg="ignoring event" container=6829356df5780e70465079073a117adf6c33fbaea26ef53c2439e2005420f8de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:47:24 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:24.683107472Z" level=info msg="ignoring event" container=4cf748f7fae7c79c67c88e79aa12d61fbc0471a19339648ee3aa20d98bd05d57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:47:25 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:25.448610693Z" level=info msg="ignoring event" container=ac365f315766ac1d4d2370dc5d94c58c5d59e951a5bb20df14e9c30bcda3ea3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:47:25 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:25.620706393Z" level=info msg="ignoring event" container=8a53f50dd7e831a06de6e2b3736b244e15ddec69ca626ccc21018250b05d5630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:47:26 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:26.662283980Z" level=info msg="ignoring event" container=bcd63f5cf2d57a2c173fb80fe2245682e2cefd23fd9854402f507ddaecb15e5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:47:26 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:26.671093158Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Oct 02 10:47:34 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:34.053021676Z" level=warning msg="Published ports are discarded when using host network mode"
	Oct 02 10:47:34 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:34.074030415Z" level=warning msg="Published ports are discarded when using host network mode"
	Oct 02 10:47:34 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:34.230164331Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Oct 02 10:47:40 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:40.271940337Z" level=info msg="ignoring event" container=15b07a99ffc83eb0d8d52a6f2f35753366f26f879cc57e9d55853e7d99e11b25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:47:40 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:40.897642300Z" level=info msg="ignoring event" container=b6c6e32a9e52289cfcd51a673fb5036ecea7a9caa4834a1321a10aa10c7c429d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:47:58 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:47:58.332384451Z" level=info msg="ignoring event" container=d7af26af433b78dd14dd2c350d7a8ef215c8f10854dab1bec2e5ed899a145ed3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:48:00 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:48:00.658738953Z" level=info msg="ignoring event" container=0dffdfb022c68fcfad80bb64bd86ecb79e4119d7b871b42562047b64f77d1ad8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:48:01 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:48:01.112614757Z" level=info msg="ignoring event" container=41b6c9ea1531da7f9794aa84f5c718fd34c347069731472137c1a6db1bcf5ebd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:48:14 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:48:14.197027184Z" level=info msg="ignoring event" container=38e84aacc47bb30b30c21024723dde528941145b20596fa7851d2980aefa210a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:48:18 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:48:18.292811696Z" level=info msg="ignoring event" container=c5ecdb0b4fe2772cf24ca23aeb00314659945a5dc05267fc570f81c5ad0839f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:48:19 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:48:19.040215830Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=a36e5ea06251ba80ee1c6316c62f8390468acadb48a768d172849a1e6517ee14
	Oct 02 10:48:19 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:48:19.066901913Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=a36e5ea06251ba80ee1c6316c62f8390468acadb48a768d172849a1e6517ee14
	Oct 02 10:48:19 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:48:19.140499824Z" level=info msg="ignoring event" container=a36e5ea06251ba80ee1c6316c62f8390468acadb48a768d172849a1e6517ee14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:48:19 ingress-addon-legacy-566627 dockerd[1299]: time="2023-10-02T10:48:19.212528602Z" level=info msg="ignoring event" container=c864c7a81fda92a4af6d91233aa596d349ae45f49002eb625aad0e279f8c63e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c5ecdb0b4fe27       97e050c3e21e9                                                                                                      6 seconds ago        Exited              hello-world-app           2                   801270217a69c       hello-world-app-5f5d8b66bb-btnsg
	59c20871781fe       nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                                      33 seconds ago       Running             nginx                     0                   8e5965d7acb56       nginx
	a36e5ea06251b       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   53 seconds ago       Exited              controller                0                   c864c7a81fda9       ingress-nginx-controller-7fcf777cb7-8h9pw
	8a53f50dd7e83       a883f7fc35610                                                                                                      59 seconds ago       Exited              patch                     1                   bcd63f5cf2d57       ingress-nginx-admission-patch-mp66h
	4cf748f7fae7c       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   ac365f315766a       ingress-nginx-admission-create-kkvr9
	9256661ce627a       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   4c05ff93df92c       storage-provisioner
	988df2111e0c9       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   6fd368caa4074       coredns-66bff467f8-hmhrp
	e09ebc5ef652f       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   30c9d86b0a528       kube-proxy-rxfb9
	fb26f5dd7dd64       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   34dbd73023fd6       etcd-ingress-addon-legacy-566627
	9d75cf446ed21       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   6cfa8423581e3       kube-apiserver-ingress-addon-legacy-566627
	08b15f4201767       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   ee37c98164413       kube-scheduler-ingress-addon-legacy-566627
	0c453b5384203       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   f764aacfd1617       kube-controller-manager-ingress-addon-legacy-566627
	
	* 
	* ==> coredns [988df2111e0c] <==
	* [INFO] 172.17.0.1:35211 - 18520 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057001s
	[INFO] 172.17.0.1:45646 - 27827 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001732216s
	[INFO] 172.17.0.1:35211 - 10622 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048713s
	[INFO] 172.17.0.1:45646 - 233 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000341701s
	[INFO] 172.17.0.1:35211 - 55224 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001107613s
	[INFO] 172.17.0.1:35211 - 36173 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000932041s
	[INFO] 172.17.0.1:35211 - 26480 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000506s
	[INFO] 172.17.0.1:59174 - 40939 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00012457s
	[INFO] 172.17.0.1:59174 - 43499 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091249s
	[INFO] 172.17.0.1:16401 - 41658 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037998s
	[INFO] 172.17.0.1:59174 - 35577 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049337s
	[INFO] 172.17.0.1:16401 - 30953 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037587s
	[INFO] 172.17.0.1:59174 - 31419 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036373s
	[INFO] 172.17.0.1:16401 - 28511 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040566s
	[INFO] 172.17.0.1:59174 - 22740 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033994s
	[INFO] 172.17.0.1:16401 - 63684 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034978s
	[INFO] 172.17.0.1:59174 - 1179 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003378s
	[INFO] 172.17.0.1:16401 - 29642 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029407s
	[INFO] 172.17.0.1:16401 - 22141 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008009s
	[INFO] 172.17.0.1:59174 - 53737 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001263485s
	[INFO] 172.17.0.1:16401 - 58501 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001190575s
	[INFO] 172.17.0.1:59174 - 64501 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00094082s
	[INFO] 172.17.0.1:59174 - 19710 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000228947s
	[INFO] 172.17.0.1:16401 - 15258 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001116138s
	[INFO] 172.17.0.1:16401 - 51542 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048385s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-566627
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-566627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=ingress-addon-legacy-566627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T10_46_52_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:46:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-566627
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 10:48:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:47:56 +0000   Mon, 02 Oct 2023 10:46:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:47:56 +0000   Mon, 02 Oct 2023 10:46:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:47:56 +0000   Mon, 02 Oct 2023 10:46:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:47:56 +0000   Mon, 02 Oct 2023 10:47:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-566627
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 771707c7f4114d3ebac7617c8e41ea0b
	  System UUID:                f6204856-7b6d-4a92-b2ab-3dedf688b026
	  Boot ID:                    8f181a8e-95ee-4bd9-9704-e77c1ff4607b
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-btnsg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 coredns-66bff467f8-hmhrp                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-ingress-addon-legacy-566627                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-566627             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-566627    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-rxfb9                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-ingress-addon-legacy-566627             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  103s (x5 over 104s)  kubelet     Node ingress-addon-legacy-566627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x5 over 104s)  kubelet     Node ingress-addon-legacy-566627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x4 over 104s)  kubelet     Node ingress-addon-legacy-566627 status is now: NodeHasSufficientPID
	  Normal  Starting                 88s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s                  kubelet     Node ingress-addon-legacy-566627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                  kubelet     Node ingress-addon-legacy-566627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                  kubelet     Node ingress-addon-legacy-566627 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                78s                  kubelet     Node ingress-addon-legacy-566627 status is now: NodeReady
	  Normal  Starting                 76s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001064] FS-Cache: O-key=[8] '8f6b3b0000000000'
	[  +0.000705] FS-Cache: N-cookie c=000000ae [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000650f42e0
	[  +0.001051] FS-Cache: N-key=[8] '8f6b3b0000000000'
	[  +0.002955] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=000000a8 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000967] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=00000000590f981e
	[  +0.001043] FS-Cache: O-key=[8] '8f6b3b0000000000'
	[  +0.000711] FS-Cache: N-cookie c=000000af [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000119233c1
	[  +0.001043] FS-Cache: N-key=[8] '8f6b3b0000000000'
	[Oct 2 10:45] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=000000a6 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=00000000df3c7dbb
	[  +0.001149] FS-Cache: O-key=[8] '8e6b3b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=000000b1 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000650f42e0
	[  +0.001057] FS-Cache: N-key=[8] '8e6b3b0000000000'
	[  +0.286574] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=000000ab [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000954] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=000000000fcfab79
	[  +0.001077] FS-Cache: O-key=[8] '946b3b0000000000'
	[  +0.000748] FS-Cache: N-cookie c=000000b2 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000ff2999f7
	[  +0.001033] FS-Cache: N-key=[8] '946b3b0000000000'
	
	* 
	* ==> etcd [fb26f5dd7dd6] <==
	* raft2023/10/02 10:46:44 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/02 10:46:44 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/02 10:46:44 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/02 10:46:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-02 10:46:44.837840 W | auth: simple token is not cryptographically signed
	2023-10-02 10:46:44.841444 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-02 10:46:44.850915 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 10:46:44.851724 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/02 10:46:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-02 10:46:44.852108 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-02 10:46:44.852209 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-02 10:46:44.852313 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/10/02 10:46:45 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/02 10:46:45 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/02 10:46:45 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/02 10:46:45 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/02 10:46:45 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-02 10:46:45.425992 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-02 10:46:45.426825 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-02 10:46:45.426888 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-02 10:46:45.426989 I | etcdserver: published {Name:ingress-addon-legacy-566627 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-02 10:46:45.427050 I | embed: ready to serve client requests
	2023-10-02 10:46:45.428336 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-02 10:46:45.429360 I | embed: ready to serve client requests
	2023-10-02 10:46:45.430491 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  10:48:24 up 18:30,  0 users,  load average: 1.72, 1.97, 1.98
	Linux ingress-addon-legacy-566627 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [9d75cf446ed2] <==
	* I1002 10:46:49.395170       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1002 10:46:49.403996       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1002 10:46:49.489763       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 10:46:49.498864       1 cache.go:39] Caches are synced for autoregister controller
	I1002 10:46:49.499445       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 10:46:49.499682       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1002 10:46:49.504266       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1002 10:46:50.286897       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1002 10:46:50.287204       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1002 10:46:50.295200       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1002 10:46:50.299277       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1002 10:46:50.299298       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1002 10:46:50.693446       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 10:46:50.739136       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 10:46:50.869576       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 10:46:50.870658       1 controller.go:609] quota admission added evaluator for: endpoints
	I1002 10:46:50.874601       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 10:46:51.736339       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1002 10:46:52.408597       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1002 10:46:52.557955       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1002 10:46:56.036569       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 10:47:07.284774       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1002 10:47:07.397486       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1002 10:47:22.384000       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1002 10:47:48.855934       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [0c453b538420] <==
	* I1002 10:47:07.581285       1 shared_informer.go:230] Caches are synced for endpoint 
	I1002 10:47:07.581752       1 shared_informer.go:230] Caches are synced for attach detach 
	I1002 10:47:07.613425       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1002 10:47:07.631183       1 shared_informer.go:230] Caches are synced for stateful set 
	I1002 10:47:07.631436       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1002 10:47:07.631636       1 shared_informer.go:230] Caches are synced for expand 
	I1002 10:47:07.633774       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b45810be-744c-48f2-9bba-693ba75c64df", APIVersion:"apps/v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-gzwgw
	I1002 10:47:07.637001       1 shared_informer.go:230] Caches are synced for disruption 
	I1002 10:47:07.637014       1 disruption.go:339] Sending events to api server.
	I1002 10:47:07.661363       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1002 10:47:07.715929       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1002 10:47:07.736512       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 10:47:07.736538       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1002 10:47:07.736626       1 shared_informer.go:230] Caches are synced for job 
	I1002 10:47:07.749452       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 10:47:07.787004       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 10:47:07.788494       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 10:47:22.380248       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6673e14e-23ba-438f-84d4-bc4c0afa52a1", APIVersion:"apps/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1002 10:47:22.405139       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"1a542ade-e4b9-42a8-ba5c-629bad2ef17f", APIVersion:"apps/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-8h9pw
	I1002 10:47:22.418354       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"dd68714f-9b5b-4362-9d80-d3f50386f083", APIVersion:"batch/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-kkvr9
	I1002 10:47:22.516351       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"868f40c6-17e2-4355-a28b-58a68588ce8c", APIVersion:"batch/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-mp66h
	I1002 10:47:25.413497       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"dd68714f-9b5b-4362-9d80-d3f50386f083", APIVersion:"batch/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 10:47:26.630746       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"868f40c6-17e2-4355-a28b-58a68588ce8c", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 10:47:57.591439       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"e96a1a04-ad5f-420a-a2e2-2a49b40751f0", APIVersion:"apps/v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1002 10:47:57.609394       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"37bb1df9-d155-4e7f-817e-2b35cbd0ed9b", APIVersion:"apps/v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-btnsg
	
	* 
	* ==> kube-proxy [e09ebc5ef652] <==
	* W1002 10:47:08.447570       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1002 10:47:08.465447       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1002 10:47:08.465483       1 server_others.go:186] Using iptables Proxier.
	I1002 10:47:08.465776       1 server.go:583] Version: v1.18.20
	I1002 10:47:08.470654       1 config.go:315] Starting service config controller
	I1002 10:47:08.470692       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1002 10:47:08.470834       1 config.go:133] Starting endpoints config controller
	I1002 10:47:08.470840       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1002 10:47:08.572017       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1002 10:47:08.572163       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [08b15f420176] <==
	* W1002 10:46:49.422565       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 10:46:49.422645       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 10:46:49.476346       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1002 10:46:49.476541       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1002 10:46:49.478838       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1002 10:46:49.479142       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:46:49.479242       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:46:49.479335       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1002 10:46:49.517638       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 10:46:49.517955       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 10:46:49.518137       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 10:46:49.518328       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 10:46:49.518479       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 10:46:49.518642       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:46:49.518785       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 10:46:49.518926       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 10:46:49.519068       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:46:49.519205       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 10:46:49.519460       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 10:46:49.529327       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 10:46:50.333346       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:46:50.451296       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:46:50.581518       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1002 10:46:50.779496       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1002 10:47:07.357034       1 factory.go:503] pod: kube-system/coredns-66bff467f8-hmhrp is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Oct 02 10:48:02 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:02.989348    2840 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 41b6c9ea1531da7f9794aa84f5c718fd34c347069731472137c1a6db1bcf5ebd
	Oct 02 10:48:02 ingress-addon-legacy-566627 kubelet[2840]: E1002 10:48:02.990705    2840 pod_workers.go:191] Error syncing pod 1093e41d-b59c-4a85-ab95-57137e88d694 ("hello-world-app-5f5d8b66bb-btnsg_default(1093e41d-b59c-4a85-ab95-57137e88d694)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-btnsg_default(1093e41d-b59c-4a85-ab95-57137e88d694)"
	Oct 02 10:48:13 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:13.159562    2840 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d7af26af433b78dd14dd2c350d7a8ef215c8f10854dab1bec2e5ed899a145ed3
	Oct 02 10:48:13 ingress-addon-legacy-566627 kubelet[2840]: E1002 10:48:13.159873    2840 pod_workers.go:191] Error syncing pod 7fa1d959-4593-421d-8e5f-c4aeb339e600 ("kube-ingress-dns-minikube_kube-system(7fa1d959-4593-421d-8e5f-c4aeb339e600)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(7fa1d959-4593-421d-8e5f-c4aeb339e600)"
	Oct 02 10:48:13 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:13.484654    2840 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-hm9mj" (UniqueName: "kubernetes.io/secret/7fa1d959-4593-421d-8e5f-c4aeb339e600-minikube-ingress-dns-token-hm9mj") pod "7fa1d959-4593-421d-8e5f-c4aeb339e600" (UID: "7fa1d959-4593-421d-8e5f-c4aeb339e600")
	Oct 02 10:48:13 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:13.490837    2840 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa1d959-4593-421d-8e5f-c4aeb339e600-minikube-ingress-dns-token-hm9mj" (OuterVolumeSpecName: "minikube-ingress-dns-token-hm9mj") pod "7fa1d959-4593-421d-8e5f-c4aeb339e600" (UID: "7fa1d959-4593-421d-8e5f-c4aeb339e600"). InnerVolumeSpecName "minikube-ingress-dns-token-hm9mj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:48:13 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:13.585024    2840 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-hm9mj" (UniqueName: "kubernetes.io/secret/7fa1d959-4593-421d-8e5f-c4aeb339e600-minikube-ingress-dns-token-hm9mj") on node "ingress-addon-legacy-566627" DevicePath ""
	Oct 02 10:48:15 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:15.083302    2840 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d7af26af433b78dd14dd2c350d7a8ef215c8f10854dab1bec2e5ed899a145ed3
	Oct 02 10:48:17 ingress-addon-legacy-566627 kubelet[2840]: E1002 10:48:17.029532    2840 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-8h9pw.178a44a7c5ad841d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-8h9pw", UID:"52502c8c-4d0b-4215-92ff-b35fb9451904", APIVersion:"v1", ResourceVersion:"457", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-566627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ec5dc417f5a1d, ext:84705751063, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ec5dc417f5a1d, ext:84705751063, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-8h9pw.178a44a7c5ad841d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 10:48:17 ingress-addon-legacy-566627 kubelet[2840]: E1002 10:48:17.051339    2840 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-8h9pw.178a44a7c5ad841d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-8h9pw", UID:"52502c8c-4d0b-4215-92ff-b35fb9451904", APIVersion:"v1", ResourceVersion:"457", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-566627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ec5dc417f5a1d, ext:84705751063, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ec5dc426c4060, ext:84721276507, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-8h9pw.178a44a7c5ad841d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 10:48:18 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:18.159705    2840 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 41b6c9ea1531da7f9794aa84f5c718fd34c347069731472137c1a6db1bcf5ebd
	Oct 02 10:48:18 ingress-addon-legacy-566627 kubelet[2840]: W1002 10:48:18.324509    2840 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod1093e41d-b59c-4a85-ab95-57137e88d694/c5ecdb0b4fe2772cf24ca23aeb00314659945a5dc05267fc570f81c5ad0839f8": none of the resources are being tracked.
	Oct 02 10:48:19 ingress-addon-legacy-566627 kubelet[2840]: W1002 10:48:19.117834    2840 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-btnsg through plugin: invalid network status for
	Oct 02 10:48:19 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:19.123127    2840 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 41b6c9ea1531da7f9794aa84f5c718fd34c347069731472137c1a6db1bcf5ebd
	Oct 02 10:48:19 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:19.123448    2840 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c5ecdb0b4fe2772cf24ca23aeb00314659945a5dc05267fc570f81c5ad0839f8
	Oct 02 10:48:19 ingress-addon-legacy-566627 kubelet[2840]: E1002 10:48:19.123676    2840 pod_workers.go:191] Error syncing pod 1093e41d-b59c-4a85-ab95-57137e88d694 ("hello-world-app-5f5d8b66bb-btnsg_default(1093e41d-b59c-4a85-ab95-57137e88d694)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-btnsg_default(1093e41d-b59c-4a85-ab95-57137e88d694)"
	Oct 02 10:48:20 ingress-addon-legacy-566627 kubelet[2840]: W1002 10:48:20.136574    2840 pod_container_deletor.go:77] Container "c864c7a81fda92a4af6d91233aa596d349ae45f49002eb625aad0e279f8c63e6" not found in pod's containers
	Oct 02 10:48:20 ingress-addon-legacy-566627 kubelet[2840]: W1002 10:48:20.139147    2840 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-btnsg through plugin: invalid network status for
	Oct 02 10:48:21 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:21.107059    2840 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/52502c8c-4d0b-4215-92ff-b35fb9451904-webhook-cert") pod "52502c8c-4d0b-4215-92ff-b35fb9451904" (UID: "52502c8c-4d0b-4215-92ff-b35fb9451904")
	Oct 02 10:48:21 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:21.107108    2840 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-sv6kz" (UniqueName: "kubernetes.io/secret/52502c8c-4d0b-4215-92ff-b35fb9451904-ingress-nginx-token-sv6kz") pod "52502c8c-4d0b-4215-92ff-b35fb9451904" (UID: "52502c8c-4d0b-4215-92ff-b35fb9451904")
	Oct 02 10:48:21 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:21.113428    2840 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52502c8c-4d0b-4215-92ff-b35fb9451904-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "52502c8c-4d0b-4215-92ff-b35fb9451904" (UID: "52502c8c-4d0b-4215-92ff-b35fb9451904"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:48:21 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:21.114319    2840 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52502c8c-4d0b-4215-92ff-b35fb9451904-ingress-nginx-token-sv6kz" (OuterVolumeSpecName: "ingress-nginx-token-sv6kz") pod "52502c8c-4d0b-4215-92ff-b35fb9451904" (UID: "52502c8c-4d0b-4215-92ff-b35fb9451904"). InnerVolumeSpecName "ingress-nginx-token-sv6kz". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:48:21 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:21.207415    2840 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/52502c8c-4d0b-4215-92ff-b35fb9451904-webhook-cert") on node "ingress-addon-legacy-566627" DevicePath ""
	Oct 02 10:48:21 ingress-addon-legacy-566627 kubelet[2840]: I1002 10:48:21.207478    2840 reconciler.go:319] Volume detached for volume "ingress-nginx-token-sv6kz" (UniqueName: "kubernetes.io/secret/52502c8c-4d0b-4215-92ff-b35fb9451904-ingress-nginx-token-sv6kz") on node "ingress-addon-legacy-566627" DevicePath ""
	Oct 02 10:48:22 ingress-addon-legacy-566627 kubelet[2840]: W1002 10:48:22.173480    2840 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/52502c8c-4d0b-4215-92ff-b35fb9451904/volumes" does not exist
	
	* 
	* ==> storage-provisioner [9256661ce627] <==
	* I1002 10:47:10.822888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 10:47:10.838631       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 10:47:10.838839       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 10:47:10.846372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 10:47:10.846991       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-566627_4ea9def0-55f1-40dc-b0b3-b1f12de46b5f!
	I1002 10:47:10.846650       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"44f584f3-fc76-46be-9f27-f4d1035b21f5", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-566627_4ea9def0-55f1-40dc-b0b3-b1f12de46b5f became leader
	I1002 10:47:10.947761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-566627_4ea9def0-55f1-40dc-b0b3-b1f12de46b5f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-566627 -n ingress-addon-legacy-566627
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-566627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (51.95s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (272.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-899833
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-899833
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-899833: (22.832611153s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-899833 --wait=true -v=8 --alsologtostderr
E1002 10:57:33.692130 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:58:01.375318 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:58:35.509162 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:59:20.137783 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:59:58.556178 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-899833 --wait=true -v=8 --alsologtostderr: exit status 80 (4m3.743907336s)

                                                
                                                
-- stdout --
	* [multinode-899833] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-899833 in cluster multinode-899833
	* Pulling base image ...
	* Restarting existing docker container for "multinode-899833" ...
	* Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-899833-m02 in cluster multinode-899833
	* Pulling base image ...
	* Restarting existing docker container for "multinode-899833-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	  - env NO_PROXY=192.168.58.2
	* Verifying Kubernetes components...
	* Starting worker node multinode-899833-m03 in cluster multinode-899833
	* Pulling base image ...
	* Restarting existing docker container for "multinode-899833-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2,192.168.58.3
	* Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	  - env NO_PROXY=192.168.58.2
	  - env NO_PROXY=192.168.58.2,192.168.58.3
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 10:57:26.768477 2249882 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:57:26.768622 2249882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:57:26.768632 2249882 out.go:309] Setting ErrFile to fd 2...
	I1002 10:57:26.768638 2249882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:57:26.768905 2249882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	I1002 10:57:26.769311 2249882 out.go:303] Setting JSON to false
	I1002 10:57:26.770346 2249882 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":67194,"bootTime":1696177053,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 10:57:26.770426 2249882 start.go:138] virtualization:  
	I1002 10:57:26.773077 2249882 out.go:177] * [multinode-899833] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 10:57:26.775244 2249882 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:57:26.776994 2249882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:57:26.775488 2249882 notify.go:220] Checking for updates...
	I1002 10:57:26.781246 2249882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:57:26.783234 2249882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	I1002 10:57:26.784926 2249882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 10:57:26.786898 2249882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:57:26.789072 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:57:26.789231 2249882 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:57:26.813322 2249882 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 10:57:26.813437 2249882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:57:26.895464 2249882 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 10:57:26.885241881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:57:26.895576 2249882 docker.go:294] overlay module found
	I1002 10:57:26.897834 2249882 out.go:177] * Using the docker driver based on existing profile
	I1002 10:57:26.899393 2249882 start.go:298] selected driver: docker
	I1002 10:57:26.899410 2249882 start.go:902] validating driver "docker" against &{Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:57:26.899557 2249882 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:57:26.899665 2249882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:57:26.971955 2249882 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 10:57:26.954531215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:57:26.972353 2249882 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 10:57:26.972382 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:57:26.972390 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:57:26.972402 2249882 start_flags.go:321] config:
	{Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:57:26.975780 2249882 out.go:177] * Starting control plane node multinode-899833 in cluster multinode-899833
	I1002 10:57:26.977703 2249882 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:57:26.979514 2249882 out.go:177] * Pulling base image ...
	I1002 10:57:26.981623 2249882 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:57:26.981681 2249882 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 10:57:26.981698 2249882 cache.go:57] Caching tarball of preloaded images
	I1002 10:57:26.981797 2249882 preload.go:174] Found /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 10:57:26.981813 2249882 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 10:57:26.981954 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:57:26.982169 2249882 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:57:27.014795 2249882 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 10:57:27.014825 2249882 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 10:57:27.014846 2249882 cache.go:195] Successfully downloaded all kic artifacts
	I1002 10:57:27.014919 2249882 start.go:365] acquiring machines lock for multinode-899833: {Name:mk4b54e7aae7d30b0899f0f511ab22ae73c52c8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:57:27.014997 2249882 start.go:369] acquired machines lock for "multinode-899833" in 45.178µs
	I1002 10:57:27.015023 2249882 start.go:96] Skipping create...Using existing machine configuration
	I1002 10:57:27.015032 2249882 fix.go:54] fixHost starting: 
	I1002 10:57:27.015306 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833 --format={{.State.Status}}
	I1002 10:57:27.036286 2249882 fix.go:102] recreateIfNeeded on multinode-899833: state=Stopped err=<nil>
	W1002 10:57:27.036327 2249882 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 10:57:27.038625 2249882 out.go:177] * Restarting existing docker container for "multinode-899833" ...
	I1002 10:57:27.040396 2249882 cli_runner.go:164] Run: docker start multinode-899833
	I1002 10:57:27.418446 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833 --format={{.State.Status}}
	I1002 10:57:27.443884 2249882 kic.go:426] container "multinode-899833" state is running.
	I1002 10:57:27.444258 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833
	I1002 10:57:27.468901 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:57:27.469140 2249882 machine.go:88] provisioning docker machine ...
	I1002 10:57:27.469160 2249882 ubuntu.go:169] provisioning hostname "multinode-899833"
	I1002 10:57:27.469212 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:27.491549 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:27.491982 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:27.492002 2249882 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899833 && echo "multinode-899833" | sudo tee /etc/hostname
	I1002 10:57:27.492707 2249882 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 10:57:30.647760 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899833
	
	I1002 10:57:30.647845 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:30.666440 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:30.666852 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:30.666880 2249882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899833/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:57:30.806460 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:57:30.806488 2249882 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2134307/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2134307/.minikube}
	I1002 10:57:30.806523 2249882 ubuntu.go:177] setting up certificates
	I1002 10:57:30.806533 2249882 provision.go:83] configureAuth start
	I1002 10:57:30.806603 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833
	I1002 10:57:30.827352 2249882 provision.go:138] copyHostCerts
	I1002 10:57:30.827394 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:57:30.827425 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem, removing ...
	I1002 10:57:30.827436 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:57:30.827516 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem (1082 bytes)
	I1002 10:57:30.827649 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:57:30.827673 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem, removing ...
	I1002 10:57:30.827682 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:57:30.827715 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem (1123 bytes)
	I1002 10:57:30.827763 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:57:30.827785 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem, removing ...
	I1002 10:57:30.827792 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:57:30.827818 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem (1679 bytes)
	I1002 10:57:30.827869 2249882 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem org=jenkins.multinode-899833 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-899833]
	I1002 10:57:31.107517 2249882 provision.go:172] copyRemoteCerts
	I1002 10:57:31.107590 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:57:31.107634 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.131593 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:31.231676 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 10:57:31.231734 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:57:31.260950 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 10:57:31.261030 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 10:57:31.289286 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 10:57:31.289345 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 10:57:31.317222 2249882 provision.go:86] duration metric: configureAuth took 510.650177ms
	I1002 10:57:31.317248 2249882 ubuntu.go:193] setting minikube options for container-runtime
	I1002 10:57:31.317510 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:57:31.317574 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.334897 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:31.335308 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:31.335325 2249882 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 10:57:31.471272 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 10:57:31.471294 2249882 ubuntu.go:71] root file system type: overlay
	I1002 10:57:31.471411 2249882 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 10:57:31.471486 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.492425 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:31.492855 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:31.492939 2249882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 10:57:31.644068 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 10:57:31.644167 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.663017 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:31.663447 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:31.663471 2249882 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 10:57:31.809123 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:57:31.809143 2249882 machine.go:91] provisioned docker machine in 4.33998987s
	I1002 10:57:31.809154 2249882 start.go:300] post-start starting for "multinode-899833" (driver="docker")
	I1002 10:57:31.809164 2249882 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:57:31.809235 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:57:31.809305 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.829590 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:31.928639 2249882 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:57:31.932917 2249882 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 10:57:31.932978 2249882 command_runner.go:130] > NAME="Ubuntu"
	I1002 10:57:31.932991 2249882 command_runner.go:130] > VERSION_ID="22.04"
	I1002 10:57:31.932999 2249882 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 10:57:31.933005 2249882 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 10:57:31.933009 2249882 command_runner.go:130] > ID=ubuntu
	I1002 10:57:31.933015 2249882 command_runner.go:130] > ID_LIKE=debian
	I1002 10:57:31.933021 2249882 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 10:57:31.933031 2249882 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 10:57:31.933048 2249882 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 10:57:31.933061 2249882 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 10:57:31.933067 2249882 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 10:57:31.933126 2249882 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 10:57:31.933155 2249882 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 10:57:31.933169 2249882 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 10:57:31.933181 2249882 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 10:57:31.933191 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/addons for local assets ...
	I1002 10:57:31.933274 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/files for local assets ...
	I1002 10:57:31.933361 2249882 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> 21397002.pem in /etc/ssl/certs
	I1002 10:57:31.933374 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /etc/ssl/certs/21397002.pem
	I1002 10:57:31.933474 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 10:57:31.944522 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:57:31.973961 2249882 start.go:303] post-start completed in 164.777009ms
	I1002 10:57:31.974050 2249882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:57:31.974092 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.993542 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:32.087357 2249882 command_runner.go:130] > 12%
	I1002 10:57:32.087439 2249882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 10:57:32.093215 2249882 command_runner.go:130] > 173G
	I1002 10:57:32.093274 2249882 fix.go:56] fixHost completed within 5.078239781s
	I1002 10:57:32.093286 2249882 start.go:83] releasing machines lock for "multinode-899833", held for 5.078277091s
	I1002 10:57:32.093382 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833
	I1002 10:57:32.110539 2249882 ssh_runner.go:195] Run: cat /version.json
	I1002 10:57:32.110596 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:32.110647 2249882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:57:32.110715 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:32.134165 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:32.142856 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:32.358018 2249882 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 10:57:32.358112 2249882 command_runner.go:130] > {"iso_version": "v1.31.0-1694625400-17243", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "c590c2ca0a7db48c4b84c041c2699711a39ab56a"}
	I1002 10:57:32.358269 2249882 ssh_runner.go:195] Run: systemctl --version
	I1002 10:57:32.363476 2249882 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1002 10:57:32.363508 2249882 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 10:57:32.363871 2249882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 10:57:32.368809 2249882 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 10:57:32.368832 2249882 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I1002 10:57:32.368840 2249882 command_runner.go:130] > Device: 36h/54d	Inode: 1835920     Links: 1
	I1002 10:57:32.368848 2249882 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:57:32.368891 2249882 command_runner.go:130] > Access: 2023-10-02 10:54:22.955277017 +0000
	I1002 10:57:32.368906 2249882 command_runner.go:130] > Modify: 2023-10-02 10:54:22.923277186 +0000
	I1002 10:57:32.368914 2249882 command_runner.go:130] > Change: 2023-10-02 10:54:22.923277186 +0000
	I1002 10:57:32.368925 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:54:22.923277186 +0000
	I1002 10:57:32.369288 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 10:57:32.390697 2249882 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 10:57:32.390787 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:57:32.401404 2249882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 10:57:32.401431 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:57:32.401466 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:57:32.401569 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:57:32.419560 2249882 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1002 10:57:32.421071 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 10:57:32.432652 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 10:57:32.444254 2249882 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 10:57:32.444327 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 10:57:32.455927 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:57:32.467556 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 10:57:32.478762 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:57:32.490227 2249882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:57:32.500867 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 10:57:32.512455 2249882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:57:32.521344 2249882 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 10:57:32.522422 2249882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:57:32.532497 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:57:32.646421 2249882 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 10:57:32.761354 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:57:32.761402 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:57:32.761461 2249882 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 10:57:32.782419 2249882 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1002 10:57:32.782856 2249882 command_runner.go:130] > [Unit]
	I1002 10:57:32.782881 2249882 command_runner.go:130] > Description=Docker Application Container Engine
	I1002 10:57:32.782888 2249882 command_runner.go:130] > Documentation=https://docs.docker.com
	I1002 10:57:32.782894 2249882 command_runner.go:130] > BindsTo=containerd.service
	I1002 10:57:32.782902 2249882 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1002 10:57:32.782907 2249882 command_runner.go:130] > Wants=network-online.target
	I1002 10:57:32.782919 2249882 command_runner.go:130] > Requires=docker.socket
	I1002 10:57:32.782924 2249882 command_runner.go:130] > StartLimitBurst=3
	I1002 10:57:32.782932 2249882 command_runner.go:130] > StartLimitIntervalSec=60
	I1002 10:57:32.782938 2249882 command_runner.go:130] > [Service]
	I1002 10:57:32.782946 2249882 command_runner.go:130] > Type=notify
	I1002 10:57:32.782955 2249882 command_runner.go:130] > Restart=on-failure
	I1002 10:57:32.782965 2249882 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1002 10:57:32.782983 2249882 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1002 10:57:32.782995 2249882 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1002 10:57:32.783004 2249882 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1002 10:57:32.783014 2249882 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1002 10:57:32.783022 2249882 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1002 10:57:32.783031 2249882 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1002 10:57:32.783051 2249882 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1002 10:57:32.783064 2249882 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1002 10:57:32.783069 2249882 command_runner.go:130] > ExecStart=
	I1002 10:57:32.783089 2249882 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1002 10:57:32.783098 2249882 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1002 10:57:32.783107 2249882 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1002 10:57:32.783115 2249882 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1002 10:57:32.783122 2249882 command_runner.go:130] > LimitNOFILE=infinity
	I1002 10:57:32.783127 2249882 command_runner.go:130] > LimitNPROC=infinity
	I1002 10:57:32.783140 2249882 command_runner.go:130] > LimitCORE=infinity
	I1002 10:57:32.783147 2249882 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1002 10:57:32.783158 2249882 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1002 10:57:32.783164 2249882 command_runner.go:130] > TasksMax=infinity
	I1002 10:57:32.783169 2249882 command_runner.go:130] > TimeoutStartSec=0
	I1002 10:57:32.783177 2249882 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1002 10:57:32.783185 2249882 command_runner.go:130] > Delegate=yes
	I1002 10:57:32.783192 2249882 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1002 10:57:32.783200 2249882 command_runner.go:130] > KillMode=process
	I1002 10:57:32.783214 2249882 command_runner.go:130] > [Install]
	I1002 10:57:32.783220 2249882 command_runner.go:130] > WantedBy=multi-user.target
	I1002 10:57:32.785023 2249882 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 10:57:32.785094 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 10:57:32.799859 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:57:32.820739 2249882 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
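The two steps above point `crictl` at the cri-dockerd socket by writing `/etc/crictl.yaml`. A minimal sketch of the same idempotent write, redirected to a temp directory instead of `/etc` for illustration (path and content taken from the log):

```shell
# Sketch: write a crictl config pointing at the cri-dockerd socket.
# Writing under a temp dir here instead of /etc/crictl.yaml for illustration.
dir=$(mktemp -d)
mkdir -p "$dir/etc"
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
    > "$dir/etc/crictl.yaml"
cat "$dir/etc/crictl.yaml"
```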
	I1002 10:57:32.822761 2249882 ssh_runner.go:195] Run: which cri-dockerd
	I1002 10:57:32.826987 2249882 command_runner.go:130] > /usr/bin/cri-dockerd
	I1002 10:57:32.827615 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 10:57:32.838811 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 10:57:32.866902 2249882 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 10:57:32.989771 2249882 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 10:57:33.099603 2249882 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 10:57:33.099762 2249882 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
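The line above records minikube selecting the "cgroupfs" cgroup driver by writing a 130-byte `/etc/docker/daemon.json`; the exact payload is not shown in the log, so the content below is an assumed, representative equivalent using dockerd's standard `exec-opts` key:

```shell
# Sketch: a daemon.json selecting the "cgroupfs" cgroup driver, as the
# log describes. The exact 130-byte file minikube writes is not shown,
# so this content is an assumed, representative equivalent.
dir=$(mktemp -d)
cat > "$dir/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
cat "$dir/daemon.json"
```

After changing `daemon.json`, the log's subsequent `systemctl daemon-reload` and `systemctl restart docker` steps are what make the new driver take effect.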
	I1002 10:57:33.125952 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:57:33.239579 2249882 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:57:33.664818 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:57:33.769951 2249882 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 10:57:33.870608 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:57:33.970340 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:57:34.075278 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 10:57:34.093128 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:57:34.199243 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 10:57:34.295953 2249882 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 10:57:34.296023 2249882 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 10:57:34.300611 2249882 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1002 10:57:34.300635 2249882 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 10:57:34.300645 2249882 command_runner.go:130] > Device: 43h/67d	Inode: 231         Links: 1
	I1002 10:57:34.300654 2249882 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1002 10:57:34.300661 2249882 command_runner.go:130] > Access: 2023-10-02 10:57:34.206254272 +0000
	I1002 10:57:34.300667 2249882 command_runner.go:130] > Modify: 2023-10-02 10:57:34.206254272 +0000
	I1002 10:57:34.300673 2249882 command_runner.go:130] > Change: 2023-10-02 10:57:34.210254250 +0000
	I1002 10:57:34.300684 2249882 command_runner.go:130] >  Birth: -
	I1002 10:57:34.301016 2249882 start.go:537] Will wait 60s for crictl version
	I1002 10:57:34.301071 2249882 ssh_runner.go:195] Run: which crictl
	I1002 10:57:34.305485 2249882 command_runner.go:130] > /usr/bin/crictl
	I1002 10:57:34.305934 2249882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:57:34.358718 2249882 command_runner.go:130] > Version:  0.1.0
	I1002 10:57:34.358740 2249882 command_runner.go:130] > RuntimeName:  docker
	I1002 10:57:34.358746 2249882 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1002 10:57:34.358753 2249882 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 10:57:34.361283 2249882 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 10:57:34.361358 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:57:34.386601 2249882 command_runner.go:130] > 24.0.6
	I1002 10:57:34.387882 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:57:34.412031 2249882 command_runner.go:130] > 24.0.6
	I1002 10:57:34.417585 2249882 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 10:57:34.417727 2249882 cli_runner.go:164] Run: docker network inspect multinode-899833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:57:34.435497 2249882 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 10:57:34.439960 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
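The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` command above is an idempotent hosts-file rewrite: any stale line ending in `<tab>host.minikube.internal` is dropped before the fresh entry is appended, so repeated runs leave exactly one entry. A sketch of the same pattern against a scratch file instead of the real `/etc/hosts`:

```shell
# Sketch of minikube's idempotent hosts rewrite, run against a scratch
# file: drop any line ending in "<tab>host.minikube.internal", then
# append the current entry.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.58.9\thost.minikube.internal\n' > "$hosts"
update_hosts() {
  { grep -v "${tab}host.minikube.internal\$" "$hosts"
    printf '192.168.58.1\thost.minikube.internal\n'; } > "$hosts.new"
  mv "$hosts.new" "$hosts"
}
update_hosts
update_hosts   # running it again still leaves exactly one entry
cat "$hosts"
```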
	I1002 10:57:34.452968 2249882 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:57:34.453043 2249882 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 10:57:34.472225 2249882 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1002 10:57:34.472246 2249882 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 10:57:34.472253 2249882 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1002 10:57:34.472260 2249882 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1002 10:57:34.472266 2249882 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1002 10:57:34.472272 2249882 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1002 10:57:34.472279 2249882 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1002 10:57:34.472286 2249882 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1002 10:57:34.472293 2249882 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:57:34.472303 2249882 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1002 10:57:34.473901 2249882 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1002 10:57:34.473925 2249882 docker.go:594] Images already preloaded, skipping extraction
	I1002 10:57:34.473990 2249882 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 10:57:34.493305 2249882 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1002 10:57:34.493335 2249882 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1002 10:57:34.493343 2249882 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1002 10:57:34.493364 2249882 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 10:57:34.493371 2249882 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1002 10:57:34.493381 2249882 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1002 10:57:34.493390 2249882 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1002 10:57:34.493403 2249882 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1002 10:57:34.493410 2249882 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:57:34.493416 2249882 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1002 10:57:34.495324 2249882 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1002 10:57:34.495355 2249882 cache_images.go:84] Images are preloaded, skipping loading
	I1002 10:57:34.495445 2249882 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 10:57:34.564548 2249882 command_runner.go:130] > cgroupfs
	I1002 10:57:34.565837 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:57:34.565851 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:57:34.565893 2249882 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:57:34.565914 2249882 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899833 NodeName:multinode-899833 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 10:57:34.566051 2249882 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899833"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:57:34.566122 2249882 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-899833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
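In the generated kubelet drop-in above, the bare `ExecStart=` line is deliberate: for a regular (non-oneshot) systemd service, a drop-in must first reset the `ExecStart` list inherited from the base unit before assigning a replacement, otherwise systemd rejects the second `ExecStart`. The pattern, reduced to its minimal shape (a config fragment, flags elided):

```ini
# Drop-in pattern: the empty ExecStart= clears the value inherited from
# the base kubelet.service; the next line then becomes the only ExecStart.
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --config=/var/lib/kubelet/config.yaml
```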
	I1002 10:57:34.566187 2249882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 10:57:34.578173 2249882 command_runner.go:130] > kubeadm
	I1002 10:57:34.578194 2249882 command_runner.go:130] > kubectl
	I1002 10:57:34.578200 2249882 command_runner.go:130] > kubelet
	I1002 10:57:34.579368 2249882 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:57:34.579455 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 10:57:34.590525 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1002 10:57:34.611752 2249882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 10:57:34.633111 2249882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1002 10:57:34.654223 2249882 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 10:57:34.658635 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:57:34.672523 2249882 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833 for IP: 192.168.58.2
	I1002 10:57:34.672556 2249882 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1d43a94e604cdd7d897bd7b1078cd14b38f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:57:34.672722 2249882 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key
	I1002 10:57:34.672776 2249882 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key
	I1002 10:57:34.672862 2249882 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key
	I1002 10:57:34.672966 2249882 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.key.cee25041
	I1002 10:57:34.673020 2249882 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.key
	I1002 10:57:34.673035 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 10:57:34.673052 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 10:57:34.673075 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 10:57:34.673094 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 10:57:34.673106 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 10:57:34.673123 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 10:57:34.673147 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 10:57:34.673163 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 10:57:34.673227 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem (1338 bytes)
	W1002 10:57:34.673302 2249882 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700_empty.pem, impossibly tiny 0 bytes
	I1002 10:57:34.673319 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 10:57:34.673349 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:57:34.673392 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:57:34.673431 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem (1679 bytes)
	I1002 10:57:34.673489 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:57:34.673538 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:34.673555 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem -> /usr/share/ca-certificates/2139700.pem
	I1002 10:57:34.673568 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /usr/share/ca-certificates/21397002.pem
	I1002 10:57:34.674198 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 10:57:34.702711 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 10:57:34.730794 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 10:57:34.759640 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 10:57:34.790252 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:57:34.818169 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 10:57:34.846612 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:57:34.875667 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 10:57:34.904333 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:57:34.933237 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem --> /usr/share/ca-certificates/2139700.pem (1338 bytes)
	I1002 10:57:34.961882 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /usr/share/ca-certificates/21397002.pem (1708 bytes)
	I1002 10:57:34.990398 2249882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 10:57:35.013604 2249882 ssh_runner.go:195] Run: openssl version
	I1002 10:57:35.020823 2249882 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 10:57:35.021234 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:57:35.034353 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:35.039405 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:35.039431 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:35.039497 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:35.047903 2249882 command_runner.go:130] > b5213941
	I1002 10:57:35.048289 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 10:57:35.059634 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2139700.pem && ln -fs /usr/share/ca-certificates/2139700.pem /etc/ssl/certs/2139700.pem"
	I1002 10:57:35.071840 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2139700.pem
	I1002 10:57:35.076656 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:57:35.076702 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:57:35.076760 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2139700.pem
	I1002 10:57:35.085967 2249882 command_runner.go:130] > 51391683
	I1002 10:57:35.086057 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2139700.pem /etc/ssl/certs/51391683.0"
	I1002 10:57:35.098244 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21397002.pem && ln -fs /usr/share/ca-certificates/21397002.pem /etc/ssl/certs/21397002.pem"
	I1002 10:57:35.110695 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21397002.pem
	I1002 10:57:35.115887 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:57:35.115919 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:57:35.115997 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21397002.pem
	I1002 10:57:35.125048 2249882 command_runner.go:130] > 3ec20f2e
	I1002 10:57:35.125205 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21397002.pem /etc/ssl/certs/3ec20f2e.0"
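The three `openssl x509 -hash` / `ln -fs` sequences above install each CA into an OpenSSL-style hashed trust directory: the subject hash (e.g. `b5213941` in the log) names a `<hash>.0` symlink under `/etc/ssl/certs`, which is how OpenSSL locates trust anchors. A hedged sketch with a throwaway self-signed cert and a temp directory standing in for `/etc/ssl/certs`:

```shell
# Sketch: install a CA into an OpenSSL hashed trust directory.
# Uses a throwaway self-signed cert and a temp dir, not the real system store.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' -days 30 \
    -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"   # analogous to b5213941.0 in the log
# Same expiry probe the log runs against each cert:
openssl x509 -noout -in "$dir/$hash.0" -checkend 86400
```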
	I1002 10:57:35.136624 2249882 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:57:35.141117 2249882 command_runner.go:130] > ca.crt
	I1002 10:57:35.141138 2249882 command_runner.go:130] > ca.key
	I1002 10:57:35.141144 2249882 command_runner.go:130] > healthcheck-client.crt
	I1002 10:57:35.141150 2249882 command_runner.go:130] > healthcheck-client.key
	I1002 10:57:35.141156 2249882 command_runner.go:130] > peer.crt
	I1002 10:57:35.141160 2249882 command_runner.go:130] > peer.key
	I1002 10:57:35.141173 2249882 command_runner.go:130] > server.crt
	I1002 10:57:35.141180 2249882 command_runner.go:130] > server.key
	I1002 10:57:35.141323 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 10:57:35.149908 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.150289 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 10:57:35.158843 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.159258 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 10:57:35.167843 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.168264 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 10:57:35.177020 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.177501 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 10:57:35.186125 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.186517 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 10:57:35.195262 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.195324 2249882 kubeadm.go:404] StartCluster: {Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubev
irt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 A
utoPauseInterval:1m0s}
	I1002 10:57:35.195506 2249882 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 10:57:35.216593 2249882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 10:57:35.226419 2249882 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 10:57:35.226489 2249882 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 10:57:35.226512 2249882 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 10:57:35.226532 2249882 command_runner.go:130] > member
	I1002 10:57:35.227627 2249882 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 10:57:35.227645 2249882 kubeadm.go:636] restartCluster start
	I1002 10:57:35.227702 2249882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 10:57:35.237831 2249882 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:35.238281 2249882 kubeconfig.go:135] verify returned: extract IP: "multinode-899833" does not appear in /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:57:35.238376 2249882 kubeconfig.go:146] "multinode-899833" context is missing from /home/jenkins/minikube-integration/17340-2134307/kubeconfig - will repair!
	I1002 10:57:35.238651 2249882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/kubeconfig: {Name:mk62f5c672074becc8cade8f73c1bedcd1d2907c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:57:35.239081 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:57:35.239360 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:57:35.240252 2249882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 10:57:35.240329 2249882 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 10:57:35.251415 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:35.251536 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:35.263441 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:35.263473 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:35.263527 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:35.275752 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:35.776460 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:35.776566 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:35.788569 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:36.275953 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:36.276043 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:36.288115 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:36.776738 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:36.776831 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:36.789002 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:37.276561 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:37.276654 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:37.288780 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:37.775909 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:37.776016 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:37.787877 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:38.276554 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:38.276658 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:38.288439 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:38.776036 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:38.776122 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:38.788248 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:39.276865 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:39.276970 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:39.288857 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:39.776557 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:39.776643 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:39.788732 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:40.275942 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:40.276057 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:40.287885 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:40.776508 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:40.776595 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:40.788381 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:41.275918 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:41.276008 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:41.288287 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:41.776078 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:41.776182 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:41.788178 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:42.276754 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:42.276844 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:42.289744 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:42.775912 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:42.776019 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:42.788320 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:43.275921 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:43.276017 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:43.288423 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:43.775884 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:43.775980 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:43.788801 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:44.276477 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:44.276577 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:44.288972 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:44.776668 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:44.776778 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:44.789301 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:45.252038 2249882 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 10:57:45.252071 2249882 kubeadm.go:1128] stopping kube-system containers ...
	I1002 10:57:45.252152 2249882 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 10:57:45.277330 2249882 command_runner.go:130] > f0ac914e78fc
	I1002 10:57:45.277349 2249882 command_runner.go:130] > 7f68c6c1b9a9
	I1002 10:57:45.277355 2249882 command_runner.go:130] > 65189e7d31ed
	I1002 10:57:45.277360 2249882 command_runner.go:130] > 71790b749215
	I1002 10:57:45.277366 2249882 command_runner.go:130] > 9e6412863248
	I1002 10:57:45.277372 2249882 command_runner.go:130] > 4e559448cbec
	I1002 10:57:45.277377 2249882 command_runner.go:130] > 7264383872ff
	I1002 10:57:45.277382 2249882 command_runner.go:130] > 659c42600174
	I1002 10:57:45.277387 2249882 command_runner.go:130] > 584b6ab2c0e0
	I1002 10:57:45.277393 2249882 command_runner.go:130] > d027a8a33607
	I1002 10:57:45.277398 2249882 command_runner.go:130] > a82e59828796
	I1002 10:57:45.277402 2249882 command_runner.go:130] > 1bdae6fab8f9
	I1002 10:57:45.277407 2249882 command_runner.go:130] > 0beca8ac2d3b
	I1002 10:57:45.277414 2249882 command_runner.go:130] > c595b0a59f0e
	I1002 10:57:45.277419 2249882 command_runner.go:130] > 832b4901b722
	I1002 10:57:45.277423 2249882 command_runner.go:130] > 09f490c928ae
	I1002 10:57:45.277428 2249882 command_runner.go:130] > 68f88034ce87
	I1002 10:57:45.277438 2249882 command_runner.go:130] > 0db8e2ef374c
	I1002 10:57:45.277691 2249882 docker.go:463] Stopping containers: [f0ac914e78fc 7f68c6c1b9a9 65189e7d31ed 71790b749215 9e6412863248 4e559448cbec 7264383872ff 659c42600174 584b6ab2c0e0 d027a8a33607 a82e59828796 1bdae6fab8f9 0beca8ac2d3b c595b0a59f0e 832b4901b722 09f490c928ae 68f88034ce87 0db8e2ef374c]
	I1002 10:57:45.277781 2249882 ssh_runner.go:195] Run: docker stop f0ac914e78fc 7f68c6c1b9a9 65189e7d31ed 71790b749215 9e6412863248 4e559448cbec 7264383872ff 659c42600174 584b6ab2c0e0 d027a8a33607 a82e59828796 1bdae6fab8f9 0beca8ac2d3b c595b0a59f0e 832b4901b722 09f490c928ae 68f88034ce87 0db8e2ef374c
	I1002 10:57:45.303066 2249882 command_runner.go:130] > f0ac914e78fc
	I1002 10:57:45.303525 2249882 command_runner.go:130] > 7f68c6c1b9a9
	I1002 10:57:45.303724 2249882 command_runner.go:130] > 65189e7d31ed
	I1002 10:57:45.303735 2249882 command_runner.go:130] > 71790b749215
	I1002 10:57:45.303741 2249882 command_runner.go:130] > 9e6412863248
	I1002 10:57:45.303903 2249882 command_runner.go:130] > 4e559448cbec
	I1002 10:57:45.304068 2249882 command_runner.go:130] > 7264383872ff
	I1002 10:57:45.304077 2249882 command_runner.go:130] > 659c42600174
	I1002 10:57:45.304215 2249882 command_runner.go:130] > 584b6ab2c0e0
	I1002 10:57:45.304225 2249882 command_runner.go:130] > d027a8a33607
	I1002 10:57:45.304333 2249882 command_runner.go:130] > a82e59828796
	I1002 10:57:45.304606 2249882 command_runner.go:130] > 1bdae6fab8f9
	I1002 10:57:45.305075 2249882 command_runner.go:130] > 0beca8ac2d3b
	I1002 10:57:45.305912 2249882 command_runner.go:130] > c595b0a59f0e
	I1002 10:57:45.305923 2249882 command_runner.go:130] > 832b4901b722
	I1002 10:57:45.306082 2249882 command_runner.go:130] > 09f490c928ae
	I1002 10:57:45.306091 2249882 command_runner.go:130] > 68f88034ce87
	I1002 10:57:45.306556 2249882 command_runner.go:130] > 0db8e2ef374c
	I1002 10:57:45.308111 2249882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 10:57:45.324644 2249882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 10:57:45.334773 2249882 command_runner.go:130] > -rw------- 1 root root 5643 Oct  2 10:54 /etc/kubernetes/admin.conf
	I1002 10:57:45.334796 2249882 command_runner.go:130] > -rw------- 1 root root 5652 Oct  2 10:54 /etc/kubernetes/controller-manager.conf
	I1002 10:57:45.334804 2249882 command_runner.go:130] > -rw------- 1 root root 2003 Oct  2 10:54 /etc/kubernetes/kubelet.conf
	I1002 10:57:45.334813 2249882 command_runner.go:130] > -rw------- 1 root root 5604 Oct  2 10:54 /etc/kubernetes/scheduler.conf
	I1002 10:57:45.335996 2249882 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct  2 10:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct  2 10:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Oct  2 10:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct  2 10:54 /etc/kubernetes/scheduler.conf
	
	I1002 10:57:45.336100 2249882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 10:57:45.346075 2249882 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1002 10:57:45.347334 2249882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 10:57:45.357942 2249882 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1002 10:57:45.358020 2249882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 10:57:45.368497 2249882 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:45.368566 2249882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 10:57:45.380096 2249882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 10:57:45.390654 2249882 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:45.390753 2249882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 10:57:45.400778 2249882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 10:57:45.411456 2249882 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 10:57:45.411482 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:45.468444 2249882 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 10:57:45.471105 2249882 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1002 10:57:45.471990 2249882 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1002 10:57:45.472699 2249882 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 10:57:45.473650 2249882 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1002 10:57:45.474286 2249882 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1002 10:57:45.474758 2249882 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1002 10:57:45.475432 2249882 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1002 10:57:45.476053 2249882 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1002 10:57:45.476620 2249882 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 10:57:45.477180 2249882 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 10:57:45.477581 2249882 command_runner.go:130] > [certs] Using the existing "sa" key
	I1002 10:57:45.480349 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:45.528573 2249882 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 10:57:45.739295 2249882 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1002 10:57:46.565891 2249882 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1002 10:57:47.154018 2249882 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 10:57:47.607732 2249882 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 10:57:47.611436 2249882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.131024828s)
	I1002 10:57:47.611466 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:47.675609 2249882 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 10:57:47.678169 2249882 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 10:57:47.678422 2249882 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 10:57:47.793144 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:47.860666 2249882 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 10:57:47.860687 2249882 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 10:57:47.873390 2249882 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 10:57:47.874423 2249882 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 10:57:47.877685 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:47.950189 2249882 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 10:57:47.957579 2249882 api_server.go:52] waiting for apiserver process to appear ...
	I1002 10:57:47.957649 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:47.974750 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:48.497247 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:48.997063 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:49.496664 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:49.509548 2249882 command_runner.go:130] > 1961
	I1002 10:57:49.511209 2249882 api_server.go:72] duration metric: took 1.553629429s to wait for apiserver process to appear ...
	I1002 10:57:49.511227 2249882 api_server.go:88] waiting for apiserver healthz status ...
	I1002 10:57:49.511245 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:53.197381 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 10:57:53.197406 2249882 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 10:57:53.197416 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:53.245014 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 10:57:53.245049 2249882 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 10:57:53.745692 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:53.754582 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 10:57:53.754610 2249882 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 10:57:54.245811 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:54.258506 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 10:57:54.258534 2249882 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 10:57:54.745980 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:54.755017 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 10:57:54.755088 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1002 10:57:54.755102 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:54.755112 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:54.755120 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:54.770255 2249882 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1002 10:57:54.770282 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:54.770291 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:54.770298 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:54.770304 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:54.770310 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:54.770317 2249882 round_trippers.go:580]     Content-Length: 263
	I1002 10:57:54.770326 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:54 GMT
	I1002 10:57:54.770332 2249882 round_trippers.go:580]     Audit-Id: 9ac7a985-84dd-49ad-986e-5586e8559991
	I1002 10:57:54.770357 2249882 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1002 10:57:54.770440 2249882 api_server.go:141] control plane version: v1.28.2
	I1002 10:57:54.770459 2249882 api_server.go:131] duration metric: took 5.259224686s to wait for apiserver health ...
	I1002 10:57:54.770467 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:57:54.770476 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:57:54.772601 2249882 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 10:57:54.774150 2249882 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 10:57:54.779163 2249882 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 10:57:54.779187 2249882 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1002 10:57:54.779198 2249882 command_runner.go:130] > Device: 36h/54d	Inode: 1826972     Links: 1
	I1002 10:57:54.779206 2249882 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:57:54.779213 2249882 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1002 10:57:54.779219 2249882 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1002 10:57:54.779229 2249882 command_runner.go:130] > Change: 2023-10-02 10:36:11.204484217 +0000
	I1002 10:57:54.779238 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:36:11.160484379 +0000
	I1002 10:57:54.779270 2249882 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 10:57:54.779286 2249882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 10:57:54.816074 2249882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 10:57:55.893633 2249882 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 10:57:55.898318 2249882 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 10:57:55.901892 2249882 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 10:57:55.916270 2249882 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 10:57:55.921696 2249882 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.105584462s)
	I1002 10:57:55.921747 2249882 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 10:57:55.921829 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:57:55.921841 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:55.921850 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:55.921856 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:55.926166 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:57:55.926196 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:55.926205 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:55.926213 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:55 GMT
	I1002 10:57:55.926219 2249882 round_trippers.go:580]     Audit-Id: 98ec81ae-9f8e-40e0-ac82-2c30f3929647
	I1002 10:57:55.926225 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:55.926232 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:55.926241 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:55.927072 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"707"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"702","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85500 chars]
	I1002 10:57:55.932374 2249882 system_pods.go:59] 12 kube-system pods found
	I1002 10:57:55.932406 2249882 system_pods.go:61] "coredns-5dd5756b68-s5pf5" [f72cd720-6739-45d2-a014-97b1e19d2574] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 10:57:55.932416 2249882 system_pods.go:61] "etcd-multinode-899833" [50fafe88-1106-4021-9c0c-7bb9d9d17ffb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 10:57:55.932423 2249882 system_pods.go:61] "kindnet-jbhdj" [82532e9c-9f56-44a1-a627-ec7462b9738f] Running
	I1002 10:57:55.932443 2249882 system_pods.go:61] "kindnet-kp6fb" [260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 10:57:55.932457 2249882 system_pods.go:61] "kindnet-lmfm5" [8790fa37-873d-4ec3-a9b3-020dcc4a8e1d] Running
	I1002 10:57:55.932464 2249882 system_pods.go:61] "kube-apiserver-multinode-899833" [fb05b79f-58ee-4097-aa20-b9721f21d29c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 10:57:55.932473 2249882 system_pods.go:61] "kube-controller-manager-multinode-899833" [92b1c97d-b38b-405b-9e51-272591b87dcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 10:57:55.932484 2249882 system_pods.go:61] "kube-proxy-76wth" [675afe15-d632-48d5-8e1e-af889d799786] Running
	I1002 10:57:55.932492 2249882 system_pods.go:61] "kube-proxy-fjcp8" [2d159cb7-69ca-4b3c-b918-b698bb157220] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 10:57:55.932501 2249882 system_pods.go:61] "kube-proxy-xnhqd" [1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd] Running
	I1002 10:57:55.932507 2249882 system_pods.go:61] "kube-scheduler-multinode-899833" [65999631-952f-42f1-ae73-f32996dc19fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 10:57:55.932515 2249882 system_pods.go:61] "storage-provisioner" [97d5bb7f-502d-4838-a926-c613783c1588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:57:55.932526 2249882 system_pods.go:74] duration metric: took 10.770059ms to wait for pod list to return data ...
	I1002 10:57:55.932534 2249882 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:57:55.932601 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 10:57:55.932609 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:55.932617 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:55.932627 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:55.935347 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:55.935369 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:55.935377 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:55.935388 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:55.935394 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:55.935400 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:55 GMT
	I1002 10:57:55.935407 2249882 round_trippers.go:580]     Audit-Id: 27dfd28a-dcbd-4a0c-82bf-c1751b6e07cf
	I1002 10:57:55.935414 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:55.935875 2249882 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"707"},"items":[{"metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15863 chars]
	I1002 10:57:55.936893 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:57:55.936923 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:57:55.936934 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:57:55.936944 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:57:55.936949 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:57:55.936957 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:57:55.936961 2249882 node_conditions.go:105] duration metric: took 4.418741ms to run NodePressure ...
	I1002 10:57:55.936979 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:56.097301 2249882 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1002 10:57:56.198512 2249882 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1002 10:57:56.202091 2249882 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 10:57:56.202188 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1002 10:57:56.202194 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.202203 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.202210 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.206618 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:57:56.206637 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.206645 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.206652 2249882 round_trippers.go:580]     Audit-Id: 4da4d3ed-5d39-4519-9288-6ac9ca8fe820
	I1002 10:57:56.206658 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.206664 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.206670 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.206676 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.207659 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"714"},"items":[{"metadata":{"name":"etcd-multinode-899833","namespace":"kube-system","uid":"50fafe88-1106-4021-9c0c-7bb9d9d17ffb","resourceVersion":"698","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.mirror":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.seen":"2023-10-02T10:54:43.504344255Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 31430 chars]
	I1002 10:57:56.209150 2249882 kubeadm.go:787] kubelet initialised
	I1002 10:57:56.209194 2249882 kubeadm.go:788] duration metric: took 7.08354ms waiting for restarted kubelet to initialise ...
	I1002 10:57:56.209220 2249882 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:57:56.209327 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:57:56.209355 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.209377 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.209401 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.215518 2249882 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1002 10:57:56.215536 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.215544 2249882 round_trippers.go:580]     Audit-Id: e783ef9b-6eb5-4c1e-bf6d-a25c93f38237
	I1002 10:57:56.215550 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.215556 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.215562 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.215568 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.215574 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.217626 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"714"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85087 chars]
	I1002 10:57:56.221119 2249882 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:57:56.221202 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:56.221209 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.221217 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.221225 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.224561 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:57:56.224608 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.224628 2249882 round_trippers.go:580]     Audit-Id: 22f50135-0f51-4085-b064-f5a395fc1ecf
	I1002 10:57:56.224650 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.224686 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.224708 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.224729 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.224750 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.225241 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:56.225840 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:56.225875 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.225898 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.225928 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.228247 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:56.228285 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.228305 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.228328 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.228362 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.228383 2249882 round_trippers.go:580]     Audit-Id: 4e0c0ca6-c0fd-405d-be62-b0c025c7eecc
	I1002 10:57:56.228403 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.228423 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.228639 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:56.229086 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:56.229130 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.229151 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.229173 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.232794 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:57:56.232840 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.232863 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.232885 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.232916 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.232939 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.232960 2249882 round_trippers.go:580]     Audit-Id: 16f10759-ca9d-4927-957a-f9823e10c897
	I1002 10:57:56.232981 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.233157 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:56.233807 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:56.233844 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.233866 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.233888 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.236073 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:56.236114 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.236135 2249882 round_trippers.go:580]     Audit-Id: 0aabe7f9-662e-40b8-94d4-d546e9df96b7
	I1002 10:57:56.236155 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.236190 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.236212 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.236230 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.236251 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.236401 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:56.737112 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:56.737132 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.737141 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.737164 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.739617 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:56.739641 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.739650 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.739657 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.739663 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.739669 2249882 round_trippers.go:580]     Audit-Id: c6d0cb3e-5db2-4303-b733-ea0f19a8c3de
	I1002 10:57:56.739680 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.739686 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.739878 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:56.740413 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:56.740430 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.740438 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.740445 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.742587 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:56.742607 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.742615 2249882 round_trippers.go:580]     Audit-Id: c6d6e03f-1091-4094-881f-1e2ff28d5598
	I1002 10:57:56.742622 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.742628 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.742651 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.742663 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.742670 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.742915 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:57.237001 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:57.237025 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:57.237035 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:57.237042 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:57.239704 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:57.239778 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:57.239800 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:57.239824 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:57.239860 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:57.239881 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:57.239903 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:57 GMT
	I1002 10:57:57.239965 2249882 round_trippers.go:580]     Audit-Id: b377ef5a-1cf9-4f5b-bd01-187b5dce5d09
	I1002 10:57:57.240114 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:57.240671 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:57.240688 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:57.240697 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:57.240704 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:57.243039 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:57.243096 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:57.243116 2249882 round_trippers.go:580]     Audit-Id: 286c7bfd-5cf9-40b5-8d69-ad3059fc8fca
	I1002 10:57:57.243138 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:57.243173 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:57.243195 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:57.243215 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:57.243236 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:57 GMT
	I1002 10:57:57.243387 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:57.737510 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:57.737535 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:57.737545 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:57.737552 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:57.740282 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:57.740305 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:57.740313 2249882 round_trippers.go:580]     Audit-Id: 9908d72d-92a6-44a9-963b-732bf8f019c7
	I1002 10:57:57.740320 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:57.740326 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:57.740332 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:57.740338 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:57.740345 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:57 GMT
	I1002 10:57:57.740731 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:57.741370 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:57.741388 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:57.741399 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:57.741406 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:57.743766 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:57.743784 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:57.743791 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:57.743798 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:57.743804 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:57.743810 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:57 GMT
	I1002 10:57:57.743816 2249882 round_trippers.go:580]     Audit-Id: 1eb46903-0b4e-48e8-9ce5-1e390482547f
	I1002 10:57:57.743823 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:57.743948 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:58.237027 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:58.237051 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:58.237060 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:58.237067 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:58.239856 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:58.239881 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:58.239890 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:58.239898 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:58 GMT
	I1002 10:57:58.239912 2249882 round_trippers.go:580]     Audit-Id: 9925a870-4874-4f4b-8d61-1486ed1394e2
	I1002 10:57:58.239919 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:58.239929 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:58.239941 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:58.240435 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:58.240978 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:58.240991 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:58.240999 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:58.241006 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:58.243342 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:58.243366 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:58.243374 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:58.243382 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:58.243388 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:58 GMT
	I1002 10:57:58.243394 2249882 round_trippers.go:580]     Audit-Id: 4d430885-95bd-45cf-aa68-1352adc12543
	I1002 10:57:58.243405 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:58.243411 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:58.243941 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:58.244322 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:57:58.737004 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:58.737026 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:58.737037 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:58.737044 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:58.739645 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:58.739666 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:58.739674 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:58 GMT
	I1002 10:57:58.739681 2249882 round_trippers.go:580]     Audit-Id: 18b520d8-6b80-42ae-bddc-3c5ef3a7f198
	I1002 10:57:58.739687 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:58.739694 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:58.739699 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:58.739706 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:58.739968 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:58.740540 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:58.740557 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:58.740567 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:58.740574 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:58.743004 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:58.743022 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:58.743031 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:58.743038 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:58 GMT
	I1002 10:57:58.743044 2249882 round_trippers.go:580]     Audit-Id: 561b9bee-35a5-4f48-8917-fdb8530865c3
	I1002 10:57:58.743050 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:58.743057 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:58.743063 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:58.743235 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:59.237135 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:59.237157 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:59.237168 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:59.237175 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:59.240119 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:59.240148 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:59.240157 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:59.240164 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:59.240171 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:59.240177 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:59 GMT
	I1002 10:57:59.240184 2249882 round_trippers.go:580]     Audit-Id: 7c9892ee-14bb-452f-82b9-3f8815279e73
	I1002 10:57:59.240191 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:59.240313 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:59.240954 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:59.240973 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:59.240983 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:59.240991 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:59.245987 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:57:59.246010 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:59.246018 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:59 GMT
	I1002 10:57:59.246024 2249882 round_trippers.go:580]     Audit-Id: 1cf2948c-6d0c-4a19-9a03-8d6878a7d405
	I1002 10:57:59.246031 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:59.246037 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:59.246043 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:59.246049 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:59.246184 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:59.737045 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:59.737070 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:59.737081 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:59.737089 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:59.739950 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:59.740082 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:59.740208 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:59.740222 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:59 GMT
	I1002 10:57:59.740229 2249882 round_trippers.go:580]     Audit-Id: 9b6ad5f0-8394-42e6-ad03-3ceda1a221df
	I1002 10:57:59.740235 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:59.740254 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:59.740275 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:59.740467 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:59.741022 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:59.741039 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:59.741059 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:59.741067 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:59.743522 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:59.743539 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:59.743547 2249882 round_trippers.go:580]     Audit-Id: 4e860e8a-f2ec-4674-b038-1a9aa304c4a1
	I1002 10:57:59.743553 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:59.743565 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:59.743584 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:59.743591 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:59.743603 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:59 GMT
	I1002 10:57:59.743846 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:00.236990 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:00.237015 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:00.237025 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:00.237033 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:00.240291 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:00.240321 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:00.240343 2249882 round_trippers.go:580]     Audit-Id: e2c846ed-da4b-4ad9-a645-218760c6f7e4
	I1002 10:58:00.240350 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:00.240359 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:00.240365 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:00.240372 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:00.240381 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:00 GMT
	I1002 10:58:00.240584 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:00.241135 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:00.241153 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:00.241162 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:00.241169 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:00.243773 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:00.243795 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:00.243803 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:00.243810 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:00.243816 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:00 GMT
	I1002 10:58:00.243822 2249882 round_trippers.go:580]     Audit-Id: 280392ad-8049-4a65-9ddb-0cc00624e4cc
	I1002 10:58:00.243828 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:00.243835 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:00.243994 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:00.244380 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:00.737667 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:00.737702 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:00.737713 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:00.737721 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:00.740329 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:00.740348 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:00.740359 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:00.740366 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:00.740372 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:00.740378 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:00 GMT
	I1002 10:58:00.740385 2249882 round_trippers.go:580]     Audit-Id: 531e8243-b389-49ae-a19a-37d070cd580a
	I1002 10:58:00.740391 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:00.740499 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:00.741049 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:00.741066 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:00.741075 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:00.741088 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:00.743323 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:00.743341 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:00.743349 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:00.743356 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:00.743362 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:00.743370 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:00 GMT
	I1002 10:58:00.743380 2249882 round_trippers.go:580]     Audit-Id: 4a3c0606-2207-4e13-b926-42c9205e0271
	I1002 10:58:00.743386 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:00.743625 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:01.237724 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:01.237746 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:01.237755 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:01.237766 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:01.240747 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:01.240773 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:01.240783 2249882 round_trippers.go:580]     Audit-Id: 6ca1fdf5-d511-4927-a9c7-3a7920a9db0c
	I1002 10:58:01.240790 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:01.240796 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:01.240835 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:01.240848 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:01.240855 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:01 GMT
	I1002 10:58:01.241104 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:01.241726 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:01.241743 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:01.241752 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:01.241760 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:01.244325 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:01.244360 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:01.244369 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:01.244376 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:01.244382 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:01.244388 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:01.244394 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:01 GMT
	I1002 10:58:01.244400 2249882 round_trippers.go:580]     Audit-Id: 42cce81d-bb60-42ad-b770-d7af9a70669c
	I1002 10:58:01.244534 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:01.737647 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:01.737672 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:01.737684 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:01.737692 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:01.740620 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:01.740742 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:01.740761 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:01.740769 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:01.740775 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:01.740794 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:01 GMT
	I1002 10:58:01.740812 2249882 round_trippers.go:580]     Audit-Id: 46addf17-480e-4f41-bae0-c7ed80a68673
	I1002 10:58:01.740819 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:01.740933 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:01.741501 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:01.741519 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:01.741528 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:01.741538 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:01.743939 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:01.743966 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:01.743975 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:01.744001 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:01.744010 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:01 GMT
	I1002 10:58:01.744020 2249882 round_trippers.go:580]     Audit-Id: 70bd939e-bb94-4ff3-a01e-4a6397a07172
	I1002 10:58:01.744026 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:01.744038 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:01.744192 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:02.237217 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:02.237245 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:02.237290 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:02.237299 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:02.240055 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:02.240115 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:02.240145 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:02.240159 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:02 GMT
	I1002 10:58:02.240166 2249882 round_trippers.go:580]     Audit-Id: db0be2ea-7760-4b43-b991-7092331f1993
	I1002 10:58:02.240185 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:02.240196 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:02.240203 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:02.240404 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:02.241038 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:02.241059 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:02.241068 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:02.241075 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:02.243614 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:02.243640 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:02.243702 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:02.243719 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:02 GMT
	I1002 10:58:02.243727 2249882 round_trippers.go:580]     Audit-Id: 308414a9-6252-414b-9c41-76c1578c5d05
	I1002 10:58:02.243741 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:02.243748 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:02.243755 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:02.244012 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:02.244392 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:02.737099 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:02.737123 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:02.737134 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:02.737141 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:02.739960 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:02.740075 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:02.740090 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:02.740100 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:02.740107 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:02.740116 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:02 GMT
	I1002 10:58:02.740125 2249882 round_trippers.go:580]     Audit-Id: c66b21f6-9180-400b-8305-78d59def8537
	I1002 10:58:02.740134 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:02.740264 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:02.740895 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:02.740916 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:02.740928 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:02.740936 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:02.743484 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:02.743547 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:02.743571 2249882 round_trippers.go:580]     Audit-Id: 1a566655-ad28-4942-b01e-89ae12782aad
	I1002 10:58:02.743593 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:02.743631 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:02.743653 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:02.743666 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:02.743672 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:02 GMT
	I1002 10:58:02.743803 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:03.237015 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:03.237037 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:03.237049 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:03.237056 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:03.239628 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:03.239691 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:03.239714 2249882 round_trippers.go:580]     Audit-Id: 92a42f5d-41aa-415d-a745-c5842fe185be
	I1002 10:58:03.239737 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:03.239773 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:03.239786 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:03.239794 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:03.239800 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:03 GMT
	I1002 10:58:03.239960 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:03.240521 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:03.240540 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:03.240548 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:03.240561 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:03.242674 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:03.242709 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:03.242717 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:03.242724 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:03 GMT
	I1002 10:58:03.242735 2249882 round_trippers.go:580]     Audit-Id: e9de74e4-6832-4fbd-a028-4a36a42614f3
	I1002 10:58:03.242747 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:03.242754 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:03.242768 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:03.242907 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:03.738012 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:03.738039 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:03.738049 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:03.738056 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:03.740601 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:03.740669 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:03.740754 2249882 round_trippers.go:580]     Audit-Id: f00e9cfe-b5f3-4c83-ba3e-865caa96060f
	I1002 10:58:03.740783 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:03.740795 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:03.740802 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:03.740809 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:03.740815 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:03 GMT
	I1002 10:58:03.740916 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:03.741485 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:03.741505 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:03.741513 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:03.741520 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:03.743761 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:03.743779 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:03.743787 2249882 round_trippers.go:580]     Audit-Id: d7db2ecf-08dc-45a5-abfa-58b6f6877907
	I1002 10:58:03.743794 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:03.743800 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:03.743806 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:03.743813 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:03.743823 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:03 GMT
	I1002 10:58:03.744136 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:04.237620 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:04.237649 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:04.237659 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:04.237673 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:04.240876 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:04.240934 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:04.240956 2249882 round_trippers.go:580]     Audit-Id: af7c4c94-819e-4d98-87dd-e2b1549b6a7d
	I1002 10:58:04.240979 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:04.241016 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:04.241040 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:04.241061 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:04.241082 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:04 GMT
	I1002 10:58:04.241228 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:04.241801 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:04.241820 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:04.241828 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:04.241835 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:04.244078 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:04.244127 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:04.244150 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:04.244173 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:04.244207 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:04.244223 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:04 GMT
	I1002 10:58:04.244230 2249882 round_trippers.go:580]     Audit-Id: 88986ffc-b1e0-41d5-bdbe-448c765f8046
	I1002 10:58:04.244237 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:04.244379 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:04.244748 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:04.737246 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:04.737289 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:04.737299 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:04.737306 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:04.739908 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:04.739970 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:04.739992 2249882 round_trippers.go:580]     Audit-Id: 1d61dbb5-8f42-4384-9081-0267efaa8427
	I1002 10:58:04.740015 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:04.740028 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:04.740050 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:04.740058 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:04.740065 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:04 GMT
	I1002 10:58:04.740200 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:04.740747 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:04.740762 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:04.740771 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:04.740778 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:04.743057 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:04.743079 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:04.743087 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:04.743094 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:04.743101 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:04 GMT
	I1002 10:58:04.743107 2249882 round_trippers.go:580]     Audit-Id: d88e1388-ac95-49f4-ac8c-ed76e47293b0
	I1002 10:58:04.743113 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:04.743124 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:04.743431 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:05.237570 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:05.237594 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:05.237604 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:05.237612 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:05.240269 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:05.240336 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:05.240353 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:05.240361 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:05.240367 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:05.240374 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:05.240380 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:05 GMT
	I1002 10:58:05.240386 2249882 round_trippers.go:580]     Audit-Id: c7ad9ed8-174a-4e3a-bea5-f94e3fe0430f
	I1002 10:58:05.240568 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:05.241121 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:05.241136 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:05.241145 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:05.241152 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:05.243543 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:05.243566 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:05.243578 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:05.243585 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:05 GMT
	I1002 10:58:05.243591 2249882 round_trippers.go:580]     Audit-Id: a80d0eb6-e476-47f0-9c76-98f71f404765
	I1002 10:58:05.243597 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:05.243607 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:05.243620 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:05.243743 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:05.737739 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:05.737772 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:05.737783 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:05.737790 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:05.740515 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:05.740539 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:05.740547 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:05.740553 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:05.740560 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:05 GMT
	I1002 10:58:05.740566 2249882 round_trippers.go:580]     Audit-Id: 6f330d73-cb69-4c0f-96a5-33c500fa2a29
	I1002 10:58:05.740572 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:05.740578 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:05.740944 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:05.741544 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:05.741563 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:05.741573 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:05.741580 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:05.743879 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:05.743942 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:05.743964 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:05.743986 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:05.744021 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:05.744049 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:05.744073 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:05 GMT
	I1002 10:58:05.744096 2249882 round_trippers.go:580]     Audit-Id: 59e0f04d-7f8e-482d-8d41-0bb84b3c5101
	I1002 10:58:05.744579 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:06.237776 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:06.237799 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:06.237808 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:06.237816 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:06.240295 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:06.240332 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:06.240340 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:06 GMT
	I1002 10:58:06.240347 2249882 round_trippers.go:580]     Audit-Id: 956dfcbf-5eab-4e19-b386-c0a0f2d6eace
	I1002 10:58:06.240353 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:06.240359 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:06.240365 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:06.240372 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:06.240565 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:06.241222 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:06.241248 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:06.241281 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:06.241291 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:06.243552 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:06.243576 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:06.243587 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:06.243594 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:06 GMT
	I1002 10:58:06.243600 2249882 round_trippers.go:580]     Audit-Id: a959105f-4af9-4097-8008-0b96ce522c3f
	I1002 10:58:06.243607 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:06.243616 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:06.243630 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:06.243770 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:06.737590 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:06.737615 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:06.737625 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:06.737632 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:06.740140 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:06.740166 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:06.740182 2249882 round_trippers.go:580]     Audit-Id: 46680136-1c11-47d4-a37b-ffcee01f8c19
	I1002 10:58:06.740189 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:06.740196 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:06.740202 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:06.740208 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:06.740218 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:06 GMT
	I1002 10:58:06.740361 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:06.740903 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:06.740920 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:06.740929 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:06.740937 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:06.743240 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:06.743261 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:06.743269 2249882 round_trippers.go:580]     Audit-Id: 8f226841-d387-499e-b255-ae1f418305cc
	I1002 10:58:06.743275 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:06.743281 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:06.743296 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:06.743303 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:06.743309 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:06 GMT
	I1002 10:58:06.743448 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:06.743811 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:07.237842 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:07.237866 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:07.237877 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:07.237884 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:07.240379 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:07.240403 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:07.240412 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:07.240418 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:07 GMT
	I1002 10:58:07.240425 2249882 round_trippers.go:580]     Audit-Id: 51eb0bfe-4320-4dd4-bb22-6f5b2240fe4d
	I1002 10:58:07.240431 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:07.240437 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:07.240444 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:07.240624 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:07.241150 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:07.241164 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:07.241173 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:07.241180 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:07.243555 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:07.243576 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:07.243584 2249882 round_trippers.go:580]     Audit-Id: 7ef87613-2ec3-45f1-a923-f8fab06e1def
	I1002 10:58:07.243591 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:07.243597 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:07.243603 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:07.243613 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:07.243620 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:07 GMT
	I1002 10:58:07.243727 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:07.737848 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:07.737871 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:07.737881 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:07.737888 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:07.740589 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:07.740658 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:07.740681 2249882 round_trippers.go:580]     Audit-Id: 1af53379-9077-49e3-9657-50b75f9b7c15
	I1002 10:58:07.740704 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:07.740741 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:07.740767 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:07.740791 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:07.740819 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:07 GMT
	I1002 10:58:07.740930 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:07.741511 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:07.741528 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:07.741537 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:07.741544 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:07.743957 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:07.744020 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:07.744057 2249882 round_trippers.go:580]     Audit-Id: 894c1c3a-4430-4b47-9e56-3384565e9850
	I1002 10:58:07.744116 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:07.744143 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:07.744155 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:07.744162 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:07.744182 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:07 GMT
	I1002 10:58:07.744335 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:08.237728 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:08.237830 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:08.237854 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:08.237876 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:08.240764 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:08.240831 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:08.240854 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:08.240877 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:08.240923 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:08.240965 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:08 GMT
	I1002 10:58:08.241003 2249882 round_trippers.go:580]     Audit-Id: c44c431e-1dc4-4af9-8b33-06a8276292a5
	I1002 10:58:08.241027 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:08.241808 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:08.242575 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:08.242626 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:08.242650 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:08.242671 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:08.245142 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:08.245197 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:08.245219 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:08 GMT
	I1002 10:58:08.245295 2249882 round_trippers.go:580]     Audit-Id: 2253c49c-b4ce-4dbd-aaf6-4a9b0c051ba8
	I1002 10:58:08.245321 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:08.245344 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:08.245368 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:08.245402 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:08.245569 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:08.737589 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:08.737612 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:08.737621 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:08.737628 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:08.740299 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:08.740380 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:08.740406 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:08.740414 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:08.740423 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:08.740430 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:08 GMT
	I1002 10:58:08.740439 2249882 round_trippers.go:580]     Audit-Id: a9ce97fd-6f4f-43dd-a232-e7651d54d6f8
	I1002 10:58:08.740446 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:08.740553 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:08.741095 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:08.741110 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:08.741118 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:08.741125 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:08.743344 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:08.743402 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:08.743423 2249882 round_trippers.go:580]     Audit-Id: 1d16c211-6916-4567-be29-144b8b56754e
	I1002 10:58:08.743444 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:08.743471 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:08.743480 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:08.743486 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:08.743492 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:08 GMT
	I1002 10:58:08.743621 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:08.744009 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:09.237034 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:09.237057 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:09.237067 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:09.237074 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:09.239673 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:09.239754 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:09.239772 2249882 round_trippers.go:580]     Audit-Id: 33839bdb-43f4-428d-b5a3-5e6b3e7dd972
	I1002 10:58:09.239780 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:09.239786 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:09.239793 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:09.239799 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:09.239811 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:09 GMT
	I1002 10:58:09.239967 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:09.240519 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:09.240535 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:09.240543 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:09.240550 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:09.242834 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:09.242862 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:09.242882 2249882 round_trippers.go:580]     Audit-Id: e2930e3d-2447-45e9-b5d4-6f408a3e417a
	I1002 10:58:09.242889 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:09.242896 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:09.242902 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:09.242908 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:09.242919 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:09 GMT
	I1002 10:58:09.243063 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:09.737106 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:09.737130 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:09.737144 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:09.737153 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:09.740118 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:09.740185 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:09.740209 2249882 round_trippers.go:580]     Audit-Id: a16b6785-4281-4ce5-a74b-280f77c56faa
	I1002 10:58:09.740232 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:09.740264 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:09.740290 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:09.740375 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:09.740398 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:09 GMT
	I1002 10:58:09.740516 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:09.741113 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:09.741138 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:09.741146 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:09.741153 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:09.743782 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:09.743804 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:09.743812 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:09.743819 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:09.743825 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:09.743832 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:09.743846 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:09 GMT
	I1002 10:58:09.743852 2249882 round_trippers.go:580]     Audit-Id: 18b2f6fb-9dc2-44df-963a-70f7f3891ff4
	I1002 10:58:09.743980 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:10.237050 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:10.237075 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:10.237085 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:10.237092 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:10.239797 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:10.239855 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:10.239879 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:10.239904 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:10 GMT
	I1002 10:58:10.239939 2249882 round_trippers.go:580]     Audit-Id: 827fb503-c740-45aa-a479-58fdbf3a35f1
	I1002 10:58:10.239951 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:10.239958 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:10.239964 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:10.240113 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:10.240650 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:10.240665 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:10.240673 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:10.240680 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:10.242952 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:10.243012 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:10.243033 2249882 round_trippers.go:580]     Audit-Id: 25aa6b4f-f21a-4122-8366-e49e6d548075
	I1002 10:58:10.243054 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:10.243090 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:10.243116 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:10.243139 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:10.243177 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:10 GMT
	I1002 10:58:10.243327 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:10.737422 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:10.737446 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:10.737455 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:10.737463 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:10.740137 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:10.740197 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:10.740212 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:10.740220 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:10.740226 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:10.740233 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:10.740239 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:10 GMT
	I1002 10:58:10.740250 2249882 round_trippers.go:580]     Audit-Id: 594eb6a0-57d6-4505-8242-83e711c61e8a
	I1002 10:58:10.740549 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:10.741122 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:10.741166 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:10.741182 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:10.741189 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:10.743390 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:10.743450 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:10.743471 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:10.743514 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:10 GMT
	I1002 10:58:10.743539 2249882 round_trippers.go:580]     Audit-Id: be74786f-bcae-4290-a603-5d0f3a9d07e4
	I1002 10:58:10.743552 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:10.743559 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:10.743565 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:10.743706 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:10.744094 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:11.237034 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:11.237057 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:11.237066 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:11.237074 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:11.239881 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:11.239904 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:11.239913 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:11.239921 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:11.239927 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:11.239933 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:11.239939 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:11 GMT
	I1002 10:58:11.239945 2249882 round_trippers.go:580]     Audit-Id: e0568670-f66c-4ce8-a381-4af93b1d24e3
	I1002 10:58:11.240038 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:11.240566 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:11.240581 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:11.240589 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:11.240596 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:11.242952 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:11.243040 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:11.243063 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:11.243097 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:11.243124 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:11.243137 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:11.243144 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:11 GMT
	I1002 10:58:11.243150 2249882 round_trippers.go:580]     Audit-Id: 3757f889-1e43-4450-bbaf-706de8aa9ae7
	I1002 10:58:11.243277 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:11.737592 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:11.737616 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:11.737625 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:11.737633 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:11.740397 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:11.740475 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:11.740493 2249882 round_trippers.go:580]     Audit-Id: a412966a-41ef-469e-a606-c9a2e422169d
	I1002 10:58:11.740501 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:11.740507 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:11.740513 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:11.740519 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:11.740529 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:11 GMT
	I1002 10:58:11.740627 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:11.741155 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:11.741171 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:11.741180 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:11.741188 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:11.743371 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:11.743392 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:11.743402 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:11.743409 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:11 GMT
	I1002 10:58:11.743415 2249882 round_trippers.go:580]     Audit-Id: 1cb3a278-0245-432d-9c3a-fafaa6c317d5
	I1002 10:58:11.743422 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:11.743428 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:11.743445 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:11.743673 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:12.237830 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:12.237870 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:12.237881 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:12.237888 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:12.240503 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:12.240524 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:12.240532 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:12.240538 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:12.240545 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:12.240551 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:12 GMT
	I1002 10:58:12.240557 2249882 round_trippers.go:580]     Audit-Id: 7961bfe7-2822-4d2c-82ec-fc048e11af83
	I1002 10:58:12.240567 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:12.240724 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:12.241285 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:12.241302 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:12.241310 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:12.241318 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:12.243635 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:12.243653 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:12.243660 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:12.243667 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:12 GMT
	I1002 10:58:12.243673 2249882 round_trippers.go:580]     Audit-Id: 5d70fbe1-e23e-427c-9485-4f125ff6d535
	I1002 10:58:12.243679 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:12.243685 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:12.243691 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:12.243985 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:12.737613 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:12.737636 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:12.737646 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:12.737653 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:12.740544 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:12.740569 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:12.740579 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:12.740586 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:12.740592 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:12 GMT
	I1002 10:58:12.740598 2249882 round_trippers.go:580]     Audit-Id: e21e0b48-feee-4033-81ff-423e59a72eee
	I1002 10:58:12.740604 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:12.740610 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:12.740793 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:12.741377 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:12.741394 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:12.741403 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:12.741411 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:12.743532 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:12.743590 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:12.743613 2249882 round_trippers.go:580]     Audit-Id: 138d2d6e-d6e3-4a27-a260-68f66631c1bd
	I1002 10:58:12.743637 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:12.743674 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:12.743700 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:12.743722 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:12.743760 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:12 GMT
	I1002 10:58:12.743909 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:12.744285 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:13.237039 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:13.237059 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:13.237070 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:13.237078 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:13.239911 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:13.239976 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:13.239998 2249882 round_trippers.go:580]     Audit-Id: e1a8f57f-d406-404e-9d82-b5cac2e92919
	I1002 10:58:13.240021 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:13.240055 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:13.240084 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:13.240105 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:13.240128 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:13 GMT
	I1002 10:58:13.240287 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:13.240826 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:13.240843 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:13.240851 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:13.240858 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:13.243386 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:13.243452 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:13.243474 2249882 round_trippers.go:580]     Audit-Id: cfa05969-ab15-40bb-8ed4-8aa338f31b54
	I1002 10:58:13.243496 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:13.243534 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:13.243547 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:13.243554 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:13.243560 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:13 GMT
	I1002 10:58:13.243683 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:13.737106 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:13.737173 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:13.737199 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:13.737208 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:13.739784 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:13.739900 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:13.739931 2249882 round_trippers.go:580]     Audit-Id: 51f80281-e6e7-455f-b717-cc5c77be1988
	I1002 10:58:13.739940 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:13.739947 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:13.739954 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:13.739964 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:13.739970 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:13 GMT
	I1002 10:58:13.740066 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:13.740606 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:13.740622 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:13.740630 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:13.740637 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:13.742891 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:13.742909 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:13.742916 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:13.742924 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:13.742930 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:13 GMT
	I1002 10:58:13.742937 2249882 round_trippers.go:580]     Audit-Id: c15fe7b3-5fd9-40c5-9e69-7393d0a2ee62
	I1002 10:58:13.742943 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:13.742950 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:13.743095 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:14.237736 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:14.237761 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:14.237771 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:14.237778 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:14.240247 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:14.240281 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:14.240289 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:14.240295 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:14.240302 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:14 GMT
	I1002 10:58:14.240308 2249882 round_trippers.go:580]     Audit-Id: e4e6a75a-c59d-4c2e-b0cd-e450781bf73f
	I1002 10:58:14.240314 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:14.240320 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:14.240518 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:14.241039 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:14.241057 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:14.241065 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:14.241073 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:14.243206 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:14.243223 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:14.243230 2249882 round_trippers.go:580]     Audit-Id: 5ef814ab-b1ee-4ed9-af32-3928dc2db88c
	I1002 10:58:14.243237 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:14.243243 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:14.243249 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:14.243255 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:14.243261 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:14 GMT
	I1002 10:58:14.243423 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:14.737416 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:14.737441 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:14.737451 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:14.737466 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:14.740075 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:14.740144 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:14.740166 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:14.740185 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:14.740223 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:14.740251 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:14.740274 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:14 GMT
	I1002 10:58:14.740309 2249882 round_trippers.go:580]     Audit-Id: 5c53c153-0388-4aae-b90d-117bc3e2ec9a
	I1002 10:58:14.740421 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:14.740957 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:14.740973 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:14.740982 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:14.740989 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:14.743192 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:14.743213 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:14.743221 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:14 GMT
	I1002 10:58:14.743228 2249882 round_trippers.go:580]     Audit-Id: 30fb4fd7-eb05-40fe-a0b8-d254f91bc4e6
	I1002 10:58:14.743234 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:14.743240 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:14.743246 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:14.743252 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:14.743567 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:15.237175 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:15.237201 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:15.237211 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:15.237218 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:15.239846 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:15.239921 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:15.239930 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:15.239937 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:15.239943 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:15.239949 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:15 GMT
	I1002 10:58:15.239955 2249882 round_trippers.go:580]     Audit-Id: f390e515-8aa1-425b-bb93-7e5c595edf99
	I1002 10:58:15.239961 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:15.240079 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:15.240630 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:15.240645 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:15.240653 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:15.240660 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:15.243047 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:15.243068 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:15.243078 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:15 GMT
	I1002 10:58:15.243084 2249882 round_trippers.go:580]     Audit-Id: 3ef698a0-4296-4b25-978b-112409f48c0c
	I1002 10:58:15.243090 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:15.243096 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:15.243101 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:15.243108 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:15.243243 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:15.243599 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:15.737371 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:15.737397 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:15.737407 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:15.737415 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:15.740356 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:15.740439 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:15.740532 2249882 round_trippers.go:580]     Audit-Id: bbe8ddc9-a415-4272-9f87-2cffa8127242
	I1002 10:58:15.740547 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:15.740555 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:15.740565 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:15.740571 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:15.740578 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:15 GMT
	I1002 10:58:15.740685 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:15.741362 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:15.741380 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:15.741392 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:15.741403 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:15.743985 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:15.744004 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:15.744013 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:15 GMT
	I1002 10:58:15.744019 2249882 round_trippers.go:580]     Audit-Id: 33abc408-fbf8-4ea3-a43a-c15bd8929996
	I1002 10:58:15.744025 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:15.744031 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:15.744038 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:15.744044 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:15.744208 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:16.237906 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:16.237939 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:16.237955 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:16.237966 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:16.240809 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:16.240831 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:16.240840 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:16.240846 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:16.240855 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:16 GMT
	I1002 10:58:16.240862 2249882 round_trippers.go:580]     Audit-Id: 47147a9f-8ca0-4944-8eae-9e63aaff490a
	I1002 10:58:16.240870 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:16.240884 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:16.241171 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:16.241832 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:16.241848 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:16.241858 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:16.241865 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:16.244039 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:16.244056 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:16.244064 2249882 round_trippers.go:580]     Audit-Id: a96203e1-cad1-41c1-a867-9b5e4ef1187f
	I1002 10:58:16.244070 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:16.244076 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:16.244083 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:16.244089 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:16.244095 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:16 GMT
	I1002 10:58:16.244219 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:16.736983 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:16.737008 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:16.737019 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:16.737026 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:16.739505 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:16.739575 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:16.739584 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:16 GMT
	I1002 10:58:16.739593 2249882 round_trippers.go:580]     Audit-Id: 4f4b6cbc-7f6c-493d-940c-987b904f63d9
	I1002 10:58:16.739599 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:16.739605 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:16.739611 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:16.739618 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:16.739715 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:16.740267 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:16.740284 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:16.740293 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:16.740301 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:16.742448 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:16.742466 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:16.742474 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:16.742480 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:16 GMT
	I1002 10:58:16.742486 2249882 round_trippers.go:580]     Audit-Id: 4b4b7d0f-302c-4e92-86c0-0162e8776bfb
	I1002 10:58:16.742492 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:16.742498 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:16.742504 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:16.742647 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:17.236935 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:17.236959 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:17.236969 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:17.236976 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:17.239401 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:17.239426 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:17.239435 2249882 round_trippers.go:580]     Audit-Id: 94a6ac79-5713-4be8-96c1-adef626d2f5c
	I1002 10:58:17.239441 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:17.239448 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:17.239454 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:17.239460 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:17.239467 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:17 GMT
	I1002 10:58:17.239594 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:17.240129 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:17.240146 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:17.240154 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:17.240160 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:17.242319 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:17.242337 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:17.242345 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:17.242351 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:17.242358 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:17.242364 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:17 GMT
	I1002 10:58:17.242370 2249882 round_trippers.go:580]     Audit-Id: ea3e5019-a17e-4774-9030-c3f583df5ec6
	I1002 10:58:17.242376 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:17.242553 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:17.737050 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:17.737074 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:17.737083 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:17.737090 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:17.739647 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:17.739673 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:17.739682 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:17.739689 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:17.739700 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:17.739707 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:17.739714 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:17 GMT
	I1002 10:58:17.739721 2249882 round_trippers.go:580]     Audit-Id: 7bdfb54a-e77a-4bad-b9ea-4061ae23877b
	I1002 10:58:17.739840 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:17.740385 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:17.740399 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:17.740408 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:17.740419 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:17.742685 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:17.742703 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:17.742711 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:17.742717 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:17.742724 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:17.742730 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:17.742736 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:17 GMT
	I1002 10:58:17.742742 2249882 round_trippers.go:580]     Audit-Id: ef6d10d0-4c16-4cf0-a70c-8c91f6dc06ca
	I1002 10:58:17.742864 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:17.743228 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:18.237234 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:18.237319 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:18.237334 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:18.237350 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:18.239730 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:18.239754 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:18.239763 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:18.239770 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:18.239776 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:18.239783 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:18 GMT
	I1002 10:58:18.239789 2249882 round_trippers.go:580]     Audit-Id: 0113e1e1-56db-4a3d-a9b4-a5a3aea4042f
	I1002 10:58:18.239796 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:18.240034 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:18.240591 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:18.240609 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:18.240620 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:18.240629 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:18.242881 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:18.242902 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:18.242909 2249882 round_trippers.go:580]     Audit-Id: 443f1ed4-b952-44a5-8ef4-947d58b9bd1b
	I1002 10:58:18.242917 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:18.242923 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:18.242929 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:18.242935 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:18.242944 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:18 GMT
	I1002 10:58:18.243120 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:18.737041 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:18.737062 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:18.737071 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:18.737078 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:18.739712 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:18.739736 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:18.739744 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:18 GMT
	I1002 10:58:18.739750 2249882 round_trippers.go:580]     Audit-Id: 009dea86-f617-4790-aec8-03ef6a252a5a
	I1002 10:58:18.739756 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:18.739763 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:18.739769 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:18.739781 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:18.739920 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:18.740449 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:18.740467 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:18.740475 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:18.740482 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:18.742702 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:18.742721 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:18.742729 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:18.742735 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:18.742742 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:18 GMT
	I1002 10:58:18.742748 2249882 round_trippers.go:580]     Audit-Id: 568a3f09-7d9b-4cd7-be0f-132a91fb4100
	I1002 10:58:18.742754 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:18.742760 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:18.742890 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:19.237814 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:19.237838 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:19.237848 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:19.237855 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:19.240430 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:19.240460 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:19.240476 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:19.240483 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:19.240490 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:19.240497 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:19.240508 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:19 GMT
	I1002 10:58:19.240518 2249882 round_trippers.go:580]     Audit-Id: d56f239e-3f29-4f4a-880c-4818fe58c493
	I1002 10:58:19.240719 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:19.241283 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:19.241295 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:19.241304 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:19.241310 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:19.243599 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:19.243618 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:19.243626 2249882 round_trippers.go:580]     Audit-Id: eb2c858f-2973-4607-bdf6-fd1cd95c5c69
	I1002 10:58:19.243632 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:19.243639 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:19.243645 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:19.243651 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:19.243657 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:19 GMT
	I1002 10:58:19.243770 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:19.737600 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:19.737621 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:19.737634 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:19.737641 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:19.740116 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:19.740141 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:19.740150 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:19.740157 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:19 GMT
	I1002 10:58:19.740164 2249882 round_trippers.go:580]     Audit-Id: d3c18718-75cb-43ac-84c5-91cdd167bd94
	I1002 10:58:19.740170 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:19.740180 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:19.740186 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:19.740463 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:19.741001 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:19.741018 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:19.741027 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:19.741035 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:19.743289 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:19.743346 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:19.743369 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:19.743392 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:19.743428 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:19.743453 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:19 GMT
	I1002 10:58:19.743475 2249882 round_trippers.go:580]     Audit-Id: f923aed8-b6f4-4903-a0a5-28872916af38
	I1002 10:58:19.743513 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:19.743675 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:19.744083 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:20.237113 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:20.237137 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:20.237147 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:20.237155 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:20.239916 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:20.239986 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:20.240008 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:20.240030 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:20 GMT
	I1002 10:58:20.240062 2249882 round_trippers.go:580]     Audit-Id: e8104e7c-c03f-4caf-96a1-c58c6d6d8e56
	I1002 10:58:20.240072 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:20.240078 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:20.240085 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:20.240208 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:20.240741 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:20.240759 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:20.240767 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:20.240774 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:20.243043 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:20.243067 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:20.243075 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:20.243082 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:20.243088 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:20.243094 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:20 GMT
	I1002 10:58:20.243101 2249882 round_trippers.go:580]     Audit-Id: bd3072fe-e7e3-4482-aa04-98c8b6c488c3
	I1002 10:58:20.243107 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:20.243323 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:20.737028 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:20.737053 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:20.737062 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:20.737069 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:20.739547 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:20.739569 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:20.739577 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:20.739584 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:20.739590 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:20.739596 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:20 GMT
	I1002 10:58:20.739606 2249882 round_trippers.go:580]     Audit-Id: a4b9a133-ce48-4244-ba4a-041549bf288d
	I1002 10:58:20.739613 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:20.739927 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:20.740495 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:20.740509 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:20.740517 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:20.740524 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:20.742656 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:20.742674 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:20.742682 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:20.742688 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:20.742695 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:20 GMT
	I1002 10:58:20.742702 2249882 round_trippers.go:580]     Audit-Id: 5cf11ef2-8888-4a79-9f3a-3a80f61c46cd
	I1002 10:58:20.742711 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:20.742717 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:20.742908 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:21.237695 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:21.237717 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:21.237727 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:21.237734 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:21.240260 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:21.240281 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:21.240290 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:21.240296 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:21.240304 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:21.240310 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:21.240316 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:21 GMT
	I1002 10:58:21.240323 2249882 round_trippers.go:580]     Audit-Id: d4db00bc-8449-414a-8343-c7136fdf75ef
	I1002 10:58:21.240454 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:21.240985 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:21.240995 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:21.241004 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:21.241010 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:21.243166 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:21.243185 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:21.243193 2249882 round_trippers.go:580]     Audit-Id: 1de028d2-3986-4493-aab0-4c99dfae4c91
	I1002 10:58:21.243199 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:21.243205 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:21.243211 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:21.243217 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:21.243225 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:21 GMT
	I1002 10:58:21.243369 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:21.737525 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:21.737549 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:21.737559 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:21.737566 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:21.740250 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:21.740319 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:21.740358 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:21.740373 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:21 GMT
	I1002 10:58:21.740380 2249882 round_trippers.go:580]     Audit-Id: c7a11823-3a1f-43dd-9a98-af8ef8f3df1f
	I1002 10:58:21.740387 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:21.740393 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:21.740401 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:21.740517 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:21.741054 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:21.741070 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:21.741078 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:21.741086 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:21.743344 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:21.743368 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:21.743377 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:21.743383 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:21.743389 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:21.743405 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:21.743414 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:21 GMT
	I1002 10:58:21.743420 2249882 round_trippers.go:580]     Audit-Id: 741873c5-325f-418d-a6dd-39196e08d315
	I1002 10:58:21.743541 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:22.237790 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:22.237815 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:22.237828 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:22.237836 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:22.241175 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:22.241197 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:22.241207 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:22.241214 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:22.241220 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:22 GMT
	I1002 10:58:22.241226 2249882 round_trippers.go:580]     Audit-Id: c718ba5e-197e-4887-8064-3fb27c840671
	I1002 10:58:22.241232 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:22.241238 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:22.241435 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:22.241997 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:22.242015 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:22.242024 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:22.242033 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:22.244338 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:22.244358 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:22.244367 2249882 round_trippers.go:580]     Audit-Id: 8d8b51c2-368f-4ba7-992f-0eec0fa72476
	I1002 10:58:22.244373 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:22.244380 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:22.244386 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:22.244393 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:22.244399 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:22 GMT
	I1002 10:58:22.244519 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:22.244899 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:22.737780 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:22.737811 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:22.737826 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:22.737839 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:22.740520 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:22.740546 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:22.740554 2249882 round_trippers.go:580]     Audit-Id: f98abc6f-169b-4456-858c-74059c452b89
	I1002 10:58:22.740564 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:22.740671 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:22.740699 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:22.740706 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:22.740729 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:22 GMT
	I1002 10:58:22.740882 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:22.741581 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:22.741598 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:22.741607 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:22.741614 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:22.743881 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:22.743906 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:22.743914 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:22.743920 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:22.743927 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:22.743934 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:22.743940 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:22 GMT
	I1002 10:58:22.743951 2249882 round_trippers.go:580]     Audit-Id: 0e370485-a24b-454a-b2b9-079bf8420451
	I1002 10:58:22.744145 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:23.237922 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:23.237963 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:23.237972 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:23.237979 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:23.240492 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:23.240511 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:23.240518 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:23 GMT
	I1002 10:58:23.240525 2249882 round_trippers.go:580]     Audit-Id: 38d88d34-3f6e-4ac5-a47c-f1ddde346844
	I1002 10:58:23.240531 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:23.240536 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:23.240543 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:23.240552 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:23.240686 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:23.241284 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:23.241297 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:23.241305 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:23.241312 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:23.243483 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:23.243499 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:23.243545 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:23 GMT
	I1002 10:58:23.243562 2249882 round_trippers.go:580]     Audit-Id: 90e79152-6a7d-45b9-b7bd-11441565bd82
	I1002 10:58:23.243568 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:23.243575 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:23.243581 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:23.243615 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:23.243752 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:23.737059 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:23.737085 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:23.737095 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:23.737102 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:23.740012 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:23.740037 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:23.740046 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:23 GMT
	I1002 10:58:23.740053 2249882 round_trippers.go:580]     Audit-Id: 68a42a4f-29be-47d2-b854-541ca1499db7
	I1002 10:58:23.740059 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:23.740091 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:23.740097 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:23.740103 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:23.740286 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:23.741368 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:23.741382 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:23.741400 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:23.741408 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:23.747710 2249882 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1002 10:58:23.747738 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:23.747746 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:23.747753 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:23 GMT
	I1002 10:58:23.747759 2249882 round_trippers.go:580]     Audit-Id: 64653b45-596c-4cb6-be33-af73746aad86
	I1002 10:58:23.747765 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:23.747772 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:23.747781 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:23.747897 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:24.237459 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:24.237485 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:24.237495 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:24.237502 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:24.240253 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:24.240288 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:24.240297 2249882 round_trippers.go:580]     Audit-Id: c355c40c-1d18-49c0-9b4d-2e78ca3e39e5
	I1002 10:58:24.240304 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:24.240310 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:24.240317 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:24.240323 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:24.240329 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:24 GMT
	I1002 10:58:24.240515 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:24.241060 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:24.241078 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:24.241087 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:24.241095 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:24.243428 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:24.243455 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:24.243463 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:24.243469 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:24.243475 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:24.243484 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:24.243493 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:24 GMT
	I1002 10:58:24.243500 2249882 round_trippers.go:580]     Audit-Id: 2b42de9c-3541-4345-a457-0f080f60de97
	I1002 10:58:24.243653 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:24.737422 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:24.737447 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:24.737458 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:24.737465 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:24.740192 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:24.740273 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:24.740291 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:24.740301 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:24 GMT
	I1002 10:58:24.740307 2249882 round_trippers.go:580]     Audit-Id: 15b161c3-bf37-4135-96bf-e9bddea1aafd
	I1002 10:58:24.740314 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:24.740336 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:24.740350 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:24.740550 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:24.741118 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:24.741136 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:24.741145 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:24.741152 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:24.743465 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:24.743519 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:24.743538 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:24.743546 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:24.743552 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:24 GMT
	I1002 10:58:24.743558 2249882 round_trippers.go:580]     Audit-Id: f408aef3-fbd9-4aeb-8e31-d16676b1c186
	I1002 10:58:24.743564 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:24.743570 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:24.743748 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:24.744187 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:25.237854 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:25.237878 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:25.237888 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:25.237899 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:25.240525 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:25.240600 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:25.240624 2249882 round_trippers.go:580]     Audit-Id: 0704ed7a-2119-4c4b-a4c7-2769eba29398
	I1002 10:58:25.240637 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:25.240659 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:25.240674 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:25.240681 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:25.240690 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:25 GMT
	I1002 10:58:25.240877 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:25.241436 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:25.241453 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:25.241462 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:25.241470 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:25.243654 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:25.243670 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:25.243677 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:25.243684 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:25.243690 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:25.243696 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:25.243703 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:25 GMT
	I1002 10:58:25.243708 2249882 round_trippers.go:580]     Audit-Id: 0483effd-0be5-49d8-80b6-cf238b32fe6c
	I1002 10:58:25.243807 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:25.737394 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:25.737416 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:25.737426 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:25.737433 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:25.740157 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:25.740183 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:25.740194 2249882 round_trippers.go:580]     Audit-Id: abe8146b-0a00-4b71-b3bd-0837de335c06
	I1002 10:58:25.740201 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:25.740208 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:25.740214 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:25.740220 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:25.740227 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:25 GMT
	I1002 10:58:25.740516 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:25.741061 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:25.741078 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:25.741088 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:25.741095 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:25.743546 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:25.743606 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:25.743628 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:25.743651 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:25 GMT
	I1002 10:58:25.743710 2249882 round_trippers.go:580]     Audit-Id: 3128dbda-f7c2-408b-8547-d1f6e25fe687
	I1002 10:58:25.743735 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:25.743754 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:25.743776 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:25.743916 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:26.237459 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:26.237487 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:26.237505 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:26.237512 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:26.243193 2249882 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 10:58:26.243216 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:26.243224 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:26 GMT
	I1002 10:58:26.243241 2249882 round_trippers.go:580]     Audit-Id: 13a61fbf-181a-4453-b22e-c29abee89c98
	I1002 10:58:26.243251 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:26.243263 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:26.243270 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:26.243279 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:26.243511 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:26.244264 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:26.244279 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:26.244288 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:26.244297 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:26.247074 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:26.247096 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:26.247110 2249882 round_trippers.go:580]     Audit-Id: bac60b7c-845d-4e73-bd4d-d7dbddac34a7
	I1002 10:58:26.247120 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:26.247126 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:26.247136 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:26.247143 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:26.247157 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:26 GMT
	I1002 10:58:26.247596 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:26.737176 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:26.737202 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:26.737212 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:26.737227 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:26.740054 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:26.740078 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:26.740086 2249882 round_trippers.go:580]     Audit-Id: ed46171d-5f41-437d-8732-69c521f932b6
	I1002 10:58:26.740093 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:26.740099 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:26.740105 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:26.740112 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:26.740125 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:26 GMT
	I1002 10:58:26.740285 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:26.740805 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:26.740821 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:26.740829 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:26.740837 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:26.743103 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:26.743134 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:26.743144 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:26.743151 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:26.743157 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:26.743164 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:26 GMT
	I1002 10:58:26.743173 2249882 round_trippers.go:580]     Audit-Id: 00e0cb97-efbb-4dcb-b602-1f5a28977ce5
	I1002 10:58:26.743184 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:26.743319 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:27.237728 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:27.237757 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:27.237767 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:27.237774 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:27.240277 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:27.240304 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:27.240313 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:27.240319 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:27.240327 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:27 GMT
	I1002 10:58:27.240333 2249882 round_trippers.go:580]     Audit-Id: 0812de73-ef1b-4915-a5aa-f688e1c6532c
	I1002 10:58:27.240339 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:27.240348 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:27.240462 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:27.240999 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:27.241015 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:27.241025 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:27.241036 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:27.243159 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:27.243183 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:27.243192 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:27.243203 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:27.243211 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:27 GMT
	I1002 10:58:27.243219 2249882 round_trippers.go:580]     Audit-Id: e7c43696-232f-4957-b68f-144e6174e700
	I1002 10:58:27.243228 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:27.243235 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:27.243497 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:27.243909 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:27.737055 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:27.737076 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:27.737086 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:27.737093 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:27.739904 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:27.739980 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:27.740003 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:27 GMT
	I1002 10:58:27.740025 2249882 round_trippers.go:580]     Audit-Id: 8aaf0ea5-bbab-47a8-9d80-2d106b57ff76
	I1002 10:58:27.740067 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:27.740090 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:27.740111 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:27.740146 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:27.740308 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:27.740946 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:27.740966 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:27.740975 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:27.740984 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:27.743443 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:27.743466 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:27.743475 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:27.743481 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:27 GMT
	I1002 10:58:27.743488 2249882 round_trippers.go:580]     Audit-Id: bcb4bd71-51bc-4c62-a171-444a1729fa7f
	I1002 10:58:27.743494 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:27.743500 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:27.743510 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:27.743636 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.237567 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:28.237592 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.237602 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.237609 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.240487 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.240508 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.240529 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.240536 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.240557 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.240569 2249882 round_trippers.go:580]     Audit-Id: 95069a40-fbfe-4e9f-bdcb-8edb5e4a1173
	I1002 10:58:28.240576 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.240587 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.241056 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1002 10:58:28.241608 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.241626 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.241635 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.241642 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.244124 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.244147 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.244155 2249882 round_trippers.go:580]     Audit-Id: 3f960e4f-ea87-4d90-80ad-f8df7bd8ca68
	I1002 10:58:28.244162 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.244168 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.244174 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.244180 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.244186 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.244446 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.244830 2249882 pod_ready.go:92] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.244850 2249882 pod_ready.go:81] duration metric: took 32.023712844s waiting for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
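The `pod_ready.go` loop logged above repeatedly GETs the pod from the API server and inspects its status until the `Ready` condition reports `"True"`. A minimal sketch of that condition check, decoding the same kind of Pod JSON the responses above carry (this is an illustrative helper, not minikube's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pod is a minimal slice of the Pod object returned by the API server,
// keeping only the fields the readiness check needs.
type pod struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// podReady reports whether the pod's Ready condition is "True",
// mirroring the check behind the pod_ready.go log lines.
func podReady(body []byte) (bool, error) {
	var p pod
	if err := json.Unmarshal(body, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet: treat as not ready.
	return false, nil
}

func main() {
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ready, err := podReady(body)
	fmt.Println(ready, err)
}
```

In the log, the poller sees `"Ready":"False"` on each cycle (e.g. at 10:58:27) until the pod's `resourceVersion` advances from 711 to 809, at which point the condition flips and the wait completes.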
	I1002 10:58:28.244861 2249882 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.244918 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-899833
	I1002 10:58:28.244928 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.244936 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.244943 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.247268 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.247299 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.247307 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.247318 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.247327 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.247334 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.247344 2249882 round_trippers.go:580]     Audit-Id: 8e1df7ce-bbf6-433a-b698-ff3400f11347
	I1002 10:58:28.247350 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.247484 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899833","namespace":"kube-system","uid":"50fafe88-1106-4021-9c0c-7bb9d9d17ffb","resourceVersion":"780","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.mirror":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.seen":"2023-10-02T10:54:43.504344255Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I1002 10:58:28.247947 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.247970 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.247978 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.247985 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.250100 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.250123 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.250131 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.250138 2249882 round_trippers.go:580]     Audit-Id: eadcd4d8-b5a0-4d3b-a9c4-553512d15155
	I1002 10:58:28.250144 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.250151 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.250164 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.250173 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.250376 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.250729 2249882 pod_ready.go:92] pod "etcd-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.250743 2249882 pod_ready.go:81] duration metric: took 5.875193ms waiting for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.250765 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.250824 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899833
	I1002 10:58:28.250832 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.250840 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.250847 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.256794 2249882 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 10:58:28.256816 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.256825 2249882 round_trippers.go:580]     Audit-Id: 3d821cb9-2a36-4b80-9d75-8d0c3983777f
	I1002 10:58:28.256832 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.256838 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.256847 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.256856 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.256874 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.257524 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899833","namespace":"kube-system","uid":"fb05b79f-58ee-4097-aa20-b9721f21d29c","resourceVersion":"785","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.mirror":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.seen":"2023-10-02T10:54:43.504350548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8445 chars]
	I1002 10:58:28.258150 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.258168 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.258177 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.258188 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.262752 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:28.262778 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.262788 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.262802 2249882 round_trippers.go:580]     Audit-Id: 57f0bec4-3d3a-4c46-932d-d7ddc317eee8
	I1002 10:58:28.262817 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.262824 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.262837 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.262846 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.263254 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.263693 2249882 pod_ready.go:92] pod "kube-apiserver-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.263710 2249882 pod_ready.go:81] duration metric: took 12.932164ms waiting for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.263722 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.263790 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899833
	I1002 10:58:28.263806 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.263815 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.263823 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.266053 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.266115 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.266156 2249882 round_trippers.go:580]     Audit-Id: 15bab521-385b-4d25-a95f-a54f951ca4f1
	I1002 10:58:28.266167 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.266175 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.266181 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.266187 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.266194 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.266579 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899833","namespace":"kube-system","uid":"92b1c97d-b38b-405b-9e51-272591b87dcf","resourceVersion":"798","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.mirror":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.seen":"2023-10-02T10:54:43.504351845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8018 chars]
	I1002 10:58:28.267106 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.267122 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.267131 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.267139 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.269746 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.269767 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.269776 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.269782 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.269788 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.269795 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.269804 2249882 round_trippers.go:580]     Audit-Id: ec65ca76-d4ea-4121-8121-a02debbd92b4
	I1002 10:58:28.269810 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.270769 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.271147 2249882 pod_ready.go:92] pod "kube-controller-manager-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.271165 2249882 pod_ready.go:81] duration metric: took 7.435784ms waiting for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.271180 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.271240 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:28.271251 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.271259 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.271272 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.273551 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.273572 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.273580 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.273587 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.273593 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.273600 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.273606 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.273615 2249882 round_trippers.go:580]     Audit-Id: 3939e445-8ced-4a73-941e-bf6123a1fe9b
	I1002 10:58:28.274091 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"473","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I1002 10:58:28.274551 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:28.274567 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.274577 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.274583 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.276861 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.276879 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.276888 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.276894 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.276900 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.276906 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.276915 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.276921 2249882 round_trippers.go:580]     Audit-Id: aa3fc13f-d81c-40ab-ae1f-389edce9bb9d
	I1002 10:58:28.277440 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"fae5cedd-05b9-4641-a9c0-540d8cb0740c","resourceVersion":"540","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4461 chars]
	I1002 10:58:28.277774 2249882 pod_ready.go:92] pod "kube-proxy-76wth" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.277791 2249882 pod_ready.go:81] duration metric: took 6.604846ms waiting for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.277803 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.438152 2249882 request.go:629] Waited for 160.280776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:28.438231 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:28.438245 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.438254 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.438262 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.440810 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.440830 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.440838 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.440844 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.440850 2249882 round_trippers.go:580]     Audit-Id: d1508304-8169-4442-9182-189cb92c322c
	I1002 10:58:28.440856 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.440867 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.440873 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.440990 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fjcp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"2d159cb7-69ca-4b3c-b918-b698bb157220","resourceVersion":"712","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1002 10:58:28.637767 2249882 request.go:629] Waited for 196.258396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.637849 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.637860 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.637869 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.637876 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.640486 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.640511 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.640520 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.640526 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.640533 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.640539 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.640545 2249882 round_trippers.go:580]     Audit-Id: f4f451e7-4070-4038-a469-4f4599fc41bf
	I1002 10:58:28.640564 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.640674 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.641079 2249882 pod_ready.go:92] pod "kube-proxy-fjcp8" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.641095 2249882 pod_ready.go:81] duration metric: took 363.279189ms waiting for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.641106 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.838456 2249882 request.go:629] Waited for 197.267719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:28.838515 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:28.838520 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.838535 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.838543 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.841182 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.841219 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.841230 2249882 round_trippers.go:580]     Audit-Id: 254f8faa-7eb5-4400-90bd-f1405d24be44
	I1002 10:58:28.841236 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.841242 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.841248 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.841282 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.841289 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.841411 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xnhqd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd","resourceVersion":"688","creationTimestamp":"2023-10-02T10:56:32Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1002 10:58:29.038276 2249882 request.go:629] Waited for 196.337017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:29.038393 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:29.038406 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.038416 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.038424 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.041039 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.041116 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.041139 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.041203 2249882 round_trippers.go:580]     Audit-Id: 2ffe88df-baeb-40bd-b4bd-4756157aa1cc
	I1002 10:58:29.041228 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.041274 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.041296 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.041308 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.041406 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m03","uid":"332112e7-39bc-44d1-86bd-88e1074e5d8d","resourceVersion":"670","creationTimestamp":"2023-10-02T10:56:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4075 chars]
	I1002 10:58:29.041778 2249882 pod_ready.go:92] pod "kube-proxy-xnhqd" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:29.041795 2249882 pod_ready.go:81] duration metric: took 400.683057ms waiting for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:29.041806 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:29.238242 2249882 request.go:629] Waited for 196.362058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:29.238347 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:29.238362 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.238372 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.238383 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.241126 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.241203 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.241221 2249882 round_trippers.go:580]     Audit-Id: f542fb8e-c0ab-4861-abbf-e494b31795af
	I1002 10:58:29.241233 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.241240 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.241271 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.241303 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.241314 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.241448 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899833","namespace":"kube-system","uid":"65999631-952f-42f1-ae73-f32996dc19fb","resourceVersion":"797","creationTimestamp":"2023-10-02T10:54:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.mirror":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.seen":"2023-10-02T10:54:35.990546729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I1002 10:58:29.438293 2249882 request.go:629] Waited for 196.332389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:29.438378 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:29.438390 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.438399 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.438406 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.440885 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.440909 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.440928 2249882 round_trippers.go:580]     Audit-Id: b8d8b6fa-9dec-49f5-88be-12bbe6151e94
	I1002 10:58:29.440935 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.440941 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.440947 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.440958 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.440968 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.441084 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:29.441504 2249882 pod_ready.go:92] pod "kube-scheduler-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:29.441522 2249882 pod_ready.go:81] duration metric: took 399.708925ms waiting for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:29.441539 2249882 pod_ready.go:38] duration metric: took 33.232289405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:29.441560 2249882 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 10:58:29.451298 2249882 command_runner.go:130] > -16
	I1002 10:58:29.451368 2249882 ops.go:34] apiserver oom_adj: -16
	I1002 10:58:29.451379 2249882 kubeadm.go:640] restartCluster took 54.223726706s
	I1002 10:58:29.451389 2249882 kubeadm.go:406] StartCluster complete in 54.256070409s
	I1002 10:58:29.451405 2249882 settings.go:142] acquiring lock: {Name:mk7b49767935c15b5f90083e95558323a1cf0ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:58:29.451479 2249882 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:29.452136 2249882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/kubeconfig: {Name:mk62f5c672074becc8cade8f73c1bedcd1d2907c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:58:29.452351 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 10:58:29.452618 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:29.452655 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:29.452765 2249882 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 10:58:29.457001 2249882 out.go:177] * Enabled addons: 
	I1002 10:58:29.452887 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:58:29.458728 2249882 addons.go:502] enable addons completed in 5.954093ms: enabled=[]
	I1002 10:58:29.459057 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 10:58:29.459067 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.459076 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.459083 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.461928 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.461946 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.461955 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.461961 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.461972 2249882 round_trippers.go:580]     Content-Length: 291
	I1002 10:58:29.461980 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.461987 2249882 round_trippers.go:580]     Audit-Id: 8228a7b4-3c8b-4231-8f38-e79ce7f7a709
	I1002 10:58:29.461992 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.461998 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.462243 2249882 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b08b27fb-9d04-4b90-bfa5-b624291dfc83","resourceVersion":"813","creationTimestamp":"2023-10-02T10:54:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 10:58:29.462415 2249882 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-899833" context rescaled to 1 replicas
	I1002 10:58:29.462447 2249882 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 10:58:29.464385 2249882 out.go:177] * Verifying Kubernetes components...
	I1002 10:58:29.466141 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:58:29.568672 2249882 command_runner.go:130] > apiVersion: v1
	I1002 10:58:29.568694 2249882 command_runner.go:130] > data:
	I1002 10:58:29.568700 2249882 command_runner.go:130] >   Corefile: |
	I1002 10:58:29.568705 2249882 command_runner.go:130] >     .:53 {
	I1002 10:58:29.568710 2249882 command_runner.go:130] >         log
	I1002 10:58:29.568716 2249882 command_runner.go:130] >         errors
	I1002 10:58:29.568721 2249882 command_runner.go:130] >         health {
	I1002 10:58:29.568726 2249882 command_runner.go:130] >            lameduck 5s
	I1002 10:58:29.568731 2249882 command_runner.go:130] >         }
	I1002 10:58:29.568737 2249882 command_runner.go:130] >         ready
	I1002 10:58:29.568749 2249882 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1002 10:58:29.568754 2249882 command_runner.go:130] >            pods insecure
	I1002 10:58:29.568764 2249882 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1002 10:58:29.568769 2249882 command_runner.go:130] >            ttl 30
	I1002 10:58:29.568776 2249882 command_runner.go:130] >         }
	I1002 10:58:29.568782 2249882 command_runner.go:130] >         prometheus :9153
	I1002 10:58:29.568790 2249882 command_runner.go:130] >         hosts {
	I1002 10:58:29.568796 2249882 command_runner.go:130] >            192.168.58.1 host.minikube.internal
	I1002 10:58:29.568801 2249882 command_runner.go:130] >            fallthrough
	I1002 10:58:29.568812 2249882 command_runner.go:130] >         }
	I1002 10:58:29.568822 2249882 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1002 10:58:29.568827 2249882 command_runner.go:130] >            max_concurrent 1000
	I1002 10:58:29.568832 2249882 command_runner.go:130] >         }
	I1002 10:58:29.568837 2249882 command_runner.go:130] >         cache 30
	I1002 10:58:29.568846 2249882 command_runner.go:130] >         loop
	I1002 10:58:29.568851 2249882 command_runner.go:130] >         reload
	I1002 10:58:29.568857 2249882 command_runner.go:130] >         loadbalance
	I1002 10:58:29.568864 2249882 command_runner.go:130] >     }
	I1002 10:58:29.568869 2249882 command_runner.go:130] > kind: ConfigMap
	I1002 10:58:29.568876 2249882 command_runner.go:130] > metadata:
	I1002 10:58:29.568882 2249882 command_runner.go:130] >   creationTimestamp: "2023-10-02T10:54:43Z"
	I1002 10:58:29.568887 2249882 command_runner.go:130] >   name: coredns
	I1002 10:58:29.568892 2249882 command_runner.go:130] >   namespace: kube-system
	I1002 10:58:29.568900 2249882 command_runner.go:130] >   resourceVersion: "370"
	I1002 10:58:29.568906 2249882 command_runner.go:130] >   uid: fc76aacb-6ec1-4746-ae20-712369e5fc29
	I1002 10:58:29.568941 2249882 node_ready.go:35] waiting up to 6m0s for node "multinode-899833" to be "Ready" ...
	I1002 10:58:29.569069 2249882 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 10:58:29.638205 2249882 request.go:629] Waited for 69.1805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:29.638264 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:29.638274 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.638286 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.638295 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.640716 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.640742 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.640752 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.640758 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.640765 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.640772 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.640778 2249882 round_trippers.go:580]     Audit-Id: 745e361b-1230-4e00-9087-1448ad59a473
	I1002 10:58:29.640785 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.640886 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:29.641297 2249882 node_ready.go:49] node "multinode-899833" has status "Ready":"True"
	I1002 10:58:29.641316 2249882 node_ready.go:38] duration metric: took 72.359043ms waiting for node "multinode-899833" to be "Ready" ...
	I1002 10:58:29.641330 2249882 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:29.837639 2249882 request.go:629] Waited for 196.230769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:29.837706 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:29.837717 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.837728 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.837738 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.841900 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:29.841988 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.842006 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.842014 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.842021 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.842027 2249882 round_trippers.go:580]     Audit-Id: c06c650d-bc3c-4e3a-8731-e6a7b19eddf7
	I1002 10:58:29.842052 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.842065 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.842634 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84297 chars]
	I1002 10:58:29.846700 2249882 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.038197 2249882 request.go:629] Waited for 191.398566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:30.038299 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:30.038313 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.038323 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.038335 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.041562 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:30.041606 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.041616 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.041624 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.041631 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.041638 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.041648 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.041655 2249882 round_trippers.go:580]     Audit-Id: dff88e0d-85eb-4e63-b7ef-f6a45d733c6d
	I1002 10:58:30.042234 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1002 10:58:30.238297 2249882 request.go:629] Waited for 195.328678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:30.238358 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:30.238369 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.238378 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.238389 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.242003 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:30.242129 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.242171 2249882 round_trippers.go:580]     Audit-Id: 617575c8-8338-4ce6-ab0d-6a7d40df04bf
	I1002 10:58:30.242194 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.242221 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.242255 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.242283 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.242308 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.242471 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:30.242902 2249882 pod_ready.go:92] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:30.242943 2249882 pod_ready.go:81] duration metric: took 396.209374ms waiting for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.242970 2249882 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.438405 2249882 request.go:629] Waited for 195.342191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-899833
	I1002 10:58:30.438523 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-899833
	I1002 10:58:30.438536 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.438546 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.438553 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.441125 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:30.441184 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.441206 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.441228 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.441281 2249882 round_trippers.go:580]     Audit-Id: 44415cd9-4ead-441f-99ac-3073cbec494f
	I1002 10:58:30.441306 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.441327 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.441347 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.441486 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899833","namespace":"kube-system","uid":"50fafe88-1106-4021-9c0c-7bb9d9d17ffb","resourceVersion":"780","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.mirror":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.seen":"2023-10-02T10:54:43.504344255Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I1002 10:58:30.638152 2249882 request.go:629] Waited for 196.164349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:30.638272 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:30.638285 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.638296 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.638304 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.640944 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:30.641008 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.641023 2249882 round_trippers.go:580]     Audit-Id: 22991863-4229-470e-a5cd-caf23ee26076
	I1002 10:58:30.641030 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.641037 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.641043 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.641050 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.641080 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.641225 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:30.641634 2249882 pod_ready.go:92] pod "etcd-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:30.641652 2249882 pod_ready.go:81] duration metric: took 398.663408ms waiting for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.641673 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.838052 2249882 request.go:629] Waited for 196.306921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899833
	I1002 10:58:30.838112 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899833
	I1002 10:58:30.838121 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.838131 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.838142 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.840708 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:30.840775 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.840813 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.840846 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.840867 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.840895 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.840904 2249882 round_trippers.go:580]     Audit-Id: 30047b8f-b47f-4213-982e-0a55e403a1b8
	I1002 10:58:30.840910 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.841057 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899833","namespace":"kube-system","uid":"fb05b79f-58ee-4097-aa20-b9721f21d29c","resourceVersion":"785","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.mirror":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.seen":"2023-10-02T10:54:43.504350548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8445 chars]
	I1002 10:58:31.037983 2249882 request.go:629] Waited for 196.342506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:31.038050 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:31.038059 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.038068 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.038076 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.041027 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.041054 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.041063 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.041076 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.041084 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.041090 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.041096 2249882 round_trippers.go:580]     Audit-Id: 1da090fa-b5fb-4c51-a46a-2160d673ff1a
	I1002 10:58:31.041163 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.041284 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:31.041680 2249882 pod_ready.go:92] pod "kube-apiserver-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:31.041697 2249882 pod_ready.go:81] duration metric: took 400.01371ms waiting for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.041710 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.238090 2249882 request.go:629] Waited for 196.313361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899833
	I1002 10:58:31.238151 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899833
	I1002 10:58:31.238160 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.238169 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.238176 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.241056 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.241082 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.241091 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.241105 2249882 round_trippers.go:580]     Audit-Id: 896448cd-7fc4-4ae3-a0c5-a22018374a28
	I1002 10:58:31.241112 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.241119 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.241125 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.241136 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.241299 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899833","namespace":"kube-system","uid":"92b1c97d-b38b-405b-9e51-272591b87dcf","resourceVersion":"798","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.mirror":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.seen":"2023-10-02T10:54:43.504351845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8018 chars]
	I1002 10:58:31.438209 2249882 request.go:629] Waited for 196.357923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:31.438294 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:31.438304 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.438314 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.438321 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.440879 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.440906 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.440914 2249882 round_trippers.go:580]     Audit-Id: 65ee20dc-99f2-4007-885f-57423986538e
	I1002 10:58:31.440921 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.440927 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.440934 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.440940 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.440947 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.441045 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:31.441443 2249882 pod_ready.go:92] pod "kube-controller-manager-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:31.441462 2249882 pod_ready.go:81] duration metric: took 399.744657ms waiting for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.441474 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.637868 2249882 request.go:629] Waited for 196.328795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:31.637930 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:31.637936 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.637948 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.637956 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.640424 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.640447 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.640456 2249882 round_trippers.go:580]     Audit-Id: e9aad646-7c6f-4900-925a-e992ec03f67a
	I1002 10:58:31.640462 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.640468 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.640474 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.640480 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.640486 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.640924 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"473","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I1002 10:58:31.837832 2249882 request.go:629] Waited for 196.383244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:31.837897 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:31.837906 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.837915 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.837931 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.840325 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.840349 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.840358 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.840365 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.840371 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.840379 2249882 round_trippers.go:580]     Audit-Id: c810c7d8-2b6f-47b6-9e64-a802306c1ce0
	I1002 10:58:31.840386 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.840392 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.840648 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"fae5cedd-05b9-4641-a9c0-540d8cb0740c","resourceVersion":"540","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4461 chars]
	I1002 10:58:31.840996 2249882 pod_ready.go:92] pod "kube-proxy-76wth" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:31.841013 2249882 pod_ready.go:81] duration metric: took 399.528905ms waiting for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.841025 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.038425 2249882 request.go:629] Waited for 197.33235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:32.038503 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:32.038513 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.038522 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.038609 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.041473 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.041498 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.041514 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.041530 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.041537 2249882 round_trippers.go:580]     Audit-Id: 44930dcc-e3c3-4dd3-8f89-d5946c519efe
	I1002 10:58:32.041543 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.041549 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.041555 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.041688 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fjcp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"2d159cb7-69ca-4b3c-b918-b698bb157220","resourceVersion":"712","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1002 10:58:32.238448 2249882 request.go:629] Waited for 196.172579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:32.238531 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:32.238558 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.238573 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.238589 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.241126 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.241151 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.241160 2249882 round_trippers.go:580]     Audit-Id: a4daacb1-0bcd-4a68-b209-f7f065e88735
	I1002 10:58:32.241167 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.241173 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.241180 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.241186 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.241197 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.241409 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:32.241810 2249882 pod_ready.go:92] pod "kube-proxy-fjcp8" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:32.241826 2249882 pod_ready.go:81] duration metric: took 400.790018ms waiting for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.241839 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.438213 2249882 request.go:629] Waited for 196.309751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:32.438274 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:32.438285 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.438294 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.438305 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.440904 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.440960 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.440983 2249882 round_trippers.go:580]     Audit-Id: d527e2de-d1c6-4a30-8de7-f91c7cbc3fac
	I1002 10:58:32.441007 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.441043 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.441056 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.441063 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.441069 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.441182 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xnhqd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd","resourceVersion":"688","creationTimestamp":"2023-10-02T10:56:32Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1002 10:58:32.637863 2249882 request.go:629] Waited for 196.163036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:32.638022 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:32.638060 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.638084 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.638105 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.640544 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.640569 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.640579 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.640585 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.640595 2249882 round_trippers.go:580]     Audit-Id: 4e5a58c7-867a-44bd-991f-a276fb38f73f
	I1002 10:58:32.640605 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.640612 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.640622 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.641039 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m03","uid":"332112e7-39bc-44d1-86bd-88e1074e5d8d","resourceVersion":"670","creationTimestamp":"2023-10-02T10:56:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4075 chars]
	I1002 10:58:32.641420 2249882 pod_ready.go:92] pod "kube-proxy-xnhqd" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:32.641442 2249882 pod_ready.go:81] duration metric: took 399.594282ms waiting for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.641453 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.837740 2249882 request.go:629] Waited for 196.22116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:32.837806 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:32.837816 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.837833 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.837842 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.840419 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.840446 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.840455 2249882 round_trippers.go:580]     Audit-Id: 560e6793-2c2c-479c-9175-e7ef31537652
	I1002 10:58:32.840462 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.840469 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.840475 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.840481 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.840488 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.840860 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899833","namespace":"kube-system","uid":"65999631-952f-42f1-ae73-f32996dc19fb","resourceVersion":"797","creationTimestamp":"2023-10-02T10:54:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.mirror":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.seen":"2023-10-02T10:54:35.990546729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I1002 10:58:33.037641 2249882 request.go:629] Waited for 196.294777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:33.037722 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:33.037729 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.037738 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.037750 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.040455 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:33.040478 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.040487 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.040494 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.040500 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.040507 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.040513 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.040519 2249882 round_trippers.go:580]     Audit-Id: 156e629f-c6e5-4da2-87c3-221cfa28955c
	I1002 10:58:33.040642 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:33.041041 2249882 pod_ready.go:92] pod "kube-scheduler-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:33.041053 2249882 pod_ready.go:81] duration metric: took 399.589623ms waiting for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:33.041066 2249882 pod_ready.go:38] duration metric: took 3.399721193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:33.041095 2249882 api_server.go:52] waiting for apiserver process to appear ...
	I1002 10:58:33.041159 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:58:33.054404 2249882 command_runner.go:130] > 1961
	I1002 10:58:33.056882 2249882 api_server.go:72] duration metric: took 3.594404466s to wait for apiserver process to appear ...
	I1002 10:58:33.056907 2249882 api_server.go:88] waiting for apiserver healthz status ...
	I1002 10:58:33.056924 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:58:33.066203 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 10:58:33.066284 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1002 10:58:33.066295 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.066304 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.066313 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.067511 2249882 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 10:58:33.067538 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.067547 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.067553 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.067560 2249882 round_trippers.go:580]     Content-Length: 263
	I1002 10:58:33.067566 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.067572 2249882 round_trippers.go:580]     Audit-Id: 1c32f57a-d0f7-47d9-a86b-ebad71fee90d
	I1002 10:58:33.067581 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.067588 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.067608 2249882 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1002 10:58:33.067654 2249882 api_server.go:141] control plane version: v1.28.2
	I1002 10:58:33.067669 2249882 api_server.go:131] duration metric: took 10.755757ms to wait for apiserver health ...
	I1002 10:58:33.067678 2249882 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 10:58:33.238043 2249882 request.go:629] Waited for 170.295861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:33.238105 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:33.238115 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.238124 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.238137 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.242065 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:33.242092 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.242102 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.242114 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.242120 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.242128 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.242134 2249882 round_trippers.go:580]     Audit-Id: 5073c0e8-4c26-49db-9be5-0064777ff6e9
	I1002 10:58:33.242143 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.243115 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84297 chars]
	I1002 10:58:33.246620 2249882 system_pods.go:59] 12 kube-system pods found
	I1002 10:58:33.246650 2249882 system_pods.go:61] "coredns-5dd5756b68-s5pf5" [f72cd720-6739-45d2-a014-97b1e19d2574] Running
	I1002 10:58:33.246657 2249882 system_pods.go:61] "etcd-multinode-899833" [50fafe88-1106-4021-9c0c-7bb9d9d17ffb] Running
	I1002 10:58:33.246662 2249882 system_pods.go:61] "kindnet-jbhdj" [82532e9c-9f56-44a1-a627-ec7462b9738f] Running
	I1002 10:58:33.246667 2249882 system_pods.go:61] "kindnet-kp6fb" [260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03] Running
	I1002 10:58:33.246673 2249882 system_pods.go:61] "kindnet-lmfm5" [8790fa37-873d-4ec3-a9b3-020dcc4a8e1d] Running
	I1002 10:58:33.246678 2249882 system_pods.go:61] "kube-apiserver-multinode-899833" [fb05b79f-58ee-4097-aa20-b9721f21d29c] Running
	I1002 10:58:33.246684 2249882 system_pods.go:61] "kube-controller-manager-multinode-899833" [92b1c97d-b38b-405b-9e51-272591b87dcf] Running
	I1002 10:58:33.246689 2249882 system_pods.go:61] "kube-proxy-76wth" [675afe15-d632-48d5-8e1e-af889d799786] Running
	I1002 10:58:33.246693 2249882 system_pods.go:61] "kube-proxy-fjcp8" [2d159cb7-69ca-4b3c-b918-b698bb157220] Running
	I1002 10:58:33.246699 2249882 system_pods.go:61] "kube-proxy-xnhqd" [1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd] Running
	I1002 10:58:33.246707 2249882 system_pods.go:61] "kube-scheduler-multinode-899833" [65999631-952f-42f1-ae73-f32996dc19fb] Running
	I1002 10:58:33.246717 2249882 system_pods.go:61] "storage-provisioner" [97d5bb7f-502d-4838-a926-c613783c1588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:58:33.246729 2249882 system_pods.go:74] duration metric: took 179.04189ms to wait for pod list to return data ...
	I1002 10:58:33.246738 2249882 default_sa.go:34] waiting for default service account to be created ...
	I1002 10:58:33.438121 2249882 request.go:629] Waited for 191.305618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 10:58:33.438206 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 10:58:33.438216 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.438225 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.438233 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.440692 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:33.440712 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.440722 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.440728 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.440734 2249882 round_trippers.go:580]     Content-Length: 261
	I1002 10:58:33.440740 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.440747 2249882 round_trippers.go:580]     Audit-Id: d5fbde01-fce0-4656-8930-2bca6e4e2e53
	I1002 10:58:33.440753 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.440759 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.440805 2249882 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a059ba47-c3c4-4536-aa1c-a44f18908aeb","resourceVersion":"307","creationTimestamp":"2023-10-02T10:54:55Z"}}]}
	I1002 10:58:33.440978 2249882 default_sa.go:45] found service account: "default"
	I1002 10:58:33.440995 2249882 default_sa.go:55] duration metric: took 194.246254ms for default service account to be created ...
	I1002 10:58:33.441004 2249882 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 10:58:33.638416 2249882 request.go:629] Waited for 197.330315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:33.638496 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:33.638508 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.638517 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.638525 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.642309 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:33.642335 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.642344 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.642351 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.642357 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.642368 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.642381 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.642393 2249882 round_trippers.go:580]     Audit-Id: 1481a1b3-a0ff-4b20-8a75-93f75cd25398
	I1002 10:58:33.643380 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84297 chars]
	I1002 10:58:33.646923 2249882 system_pods.go:86] 12 kube-system pods found
	I1002 10:58:33.646949 2249882 system_pods.go:89] "coredns-5dd5756b68-s5pf5" [f72cd720-6739-45d2-a014-97b1e19d2574] Running
	I1002 10:58:33.646956 2249882 system_pods.go:89] "etcd-multinode-899833" [50fafe88-1106-4021-9c0c-7bb9d9d17ffb] Running
	I1002 10:58:33.646962 2249882 system_pods.go:89] "kindnet-jbhdj" [82532e9c-9f56-44a1-a627-ec7462b9738f] Running
	I1002 10:58:33.646967 2249882 system_pods.go:89] "kindnet-kp6fb" [260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03] Running
	I1002 10:58:33.646972 2249882 system_pods.go:89] "kindnet-lmfm5" [8790fa37-873d-4ec3-a9b3-020dcc4a8e1d] Running
	I1002 10:58:33.646982 2249882 system_pods.go:89] "kube-apiserver-multinode-899833" [fb05b79f-58ee-4097-aa20-b9721f21d29c] Running
	I1002 10:58:33.646993 2249882 system_pods.go:89] "kube-controller-manager-multinode-899833" [92b1c97d-b38b-405b-9e51-272591b87dcf] Running
	I1002 10:58:33.646998 2249882 system_pods.go:89] "kube-proxy-76wth" [675afe15-d632-48d5-8e1e-af889d799786] Running
	I1002 10:58:33.647005 2249882 system_pods.go:89] "kube-proxy-fjcp8" [2d159cb7-69ca-4b3c-b918-b698bb157220] Running
	I1002 10:58:33.647011 2249882 system_pods.go:89] "kube-proxy-xnhqd" [1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd] Running
	I1002 10:58:33.647022 2249882 system_pods.go:89] "kube-scheduler-multinode-899833" [65999631-952f-42f1-ae73-f32996dc19fb] Running
	I1002 10:58:33.647030 2249882 system_pods.go:89] "storage-provisioner" [97d5bb7f-502d-4838-a926-c613783c1588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:58:33.647041 2249882 system_pods.go:126] duration metric: took 206.030716ms to wait for k8s-apps to be running ...
	I1002 10:58:33.647048 2249882 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 10:58:33.647109 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:58:33.660356 2249882 system_svc.go:56] duration metric: took 13.295428ms WaitForService to wait for kubelet.
	I1002 10:58:33.660380 2249882 kubeadm.go:581] duration metric: took 4.197911011s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 10:58:33.660434 2249882 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:58:33.837780 2249882 request.go:629] Waited for 177.246773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1002 10:58:33.837850 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 10:58:33.837860 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.837869 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.837879 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.840809 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:33.840835 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.840844 2249882 round_trippers.go:580]     Audit-Id: f64907f6-5559-497c-8993-f409e00e0a68
	I1002 10:58:33.840850 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.840856 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.840863 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.840869 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.840875 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.841074 2249882 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15863 chars]
	I1002 10:58:33.841901 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:33.841927 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:33.841938 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:33.841943 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:33.841948 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:33.841953 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:33.841957 2249882 node_conditions.go:105] duration metric: took 181.511685ms to run NodePressure ...
	I1002 10:58:33.841970 2249882 start.go:228] waiting for startup goroutines ...
	I1002 10:58:33.841977 2249882 start.go:233] waiting for cluster config update ...
	I1002 10:58:33.841984 2249882 start.go:242] writing updated cluster config ...
	I1002 10:58:33.842469 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:33.842576 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:33.846452 2249882 out.go:177] * Starting worker node multinode-899833-m02 in cluster multinode-899833
	I1002 10:58:33.848213 2249882 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:58:33.850041 2249882 out.go:177] * Pulling base image ...
	I1002 10:58:33.852150 2249882 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:58:33.852180 2249882 cache.go:57] Caching tarball of preloaded images
	I1002 10:58:33.852214 2249882 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:58:33.852297 2249882 preload.go:174] Found /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 10:58:33.852310 2249882 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 10:58:33.852469 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:33.876215 2249882 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 10:58:33.876238 2249882 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 10:58:33.876257 2249882 cache.go:195] Successfully downloaded all kic artifacts
	I1002 10:58:33.876285 2249882 start.go:365] acquiring machines lock for multinode-899833-m02: {Name:mkf7f969bdbd1303c4e28422c1c64792eb1255fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:58:33.876343 2249882 start.go:369] acquired machines lock for "multinode-899833-m02" in 40.632µs
	I1002 10:58:33.876362 2249882 start.go:96] Skipping create...Using existing machine configuration
	I1002 10:58:33.876368 2249882 fix.go:54] fixHost starting: m02
	I1002 10:58:33.876645 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833-m02 --format={{.State.Status}}
	I1002 10:58:33.901208 2249882 fix.go:102] recreateIfNeeded on multinode-899833-m02: state=Stopped err=<nil>
	W1002 10:58:33.901230 2249882 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 10:58:33.903659 2249882 out.go:177] * Restarting existing docker container for "multinode-899833-m02" ...
	I1002 10:58:33.905501 2249882 cli_runner.go:164] Run: docker start multinode-899833-m02
	I1002 10:58:34.267235 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833-m02 --format={{.State.Status}}
	I1002 10:58:34.297957 2249882 kic.go:426] container "multinode-899833-m02" state is running.
	I1002 10:58:34.298319 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m02
	I1002 10:58:34.327025 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:34.327266 2249882 machine.go:88] provisioning docker machine ...
	I1002 10:58:34.327285 2249882 ubuntu.go:169] provisioning hostname "multinode-899833-m02"
	I1002 10:58:34.327336 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:34.349404 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:34.350010 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:34.350028 2249882 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899833-m02 && echo "multinode-899833-m02" | sudo tee /etc/hostname
	I1002 10:58:34.350778 2249882 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 10:58:37.504175 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899833-m02
	
	I1002 10:58:37.504255 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:37.522926 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:37.523327 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:37.523351 2249882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899833-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899833-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899833-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:58:37.662495 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
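The `/etc/hosts` patch the provisioner just ran over SSH is idempotent: it only touches the `127.0.1.1` line, and only when the hostname is not already present. A minimal local sketch of the same grep/sed logic against a scratch file (the file path is a temp file here, not the real `/etc/hosts`; the hostname is the one from the log):

```shell
# Exercise the /etc/hosts rewrite from the log against a scratch file.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=multinode-899833-m02          # hostname taken from the log above
if ! grep -q "[[:space:]]$name" "$hosts"; then
  if grep -q '^127.0.1.1[[:space:]]' "$hosts"; then
    # Hostname missing but a 127.0.1.1 entry exists: rewrite it in place.
    sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # No 127.0.1.1 entry at all: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
grep '^127.0.1.1' "$hosts"
```

Running the block a second time is a no-op, since the first `grep` then finds the hostname and skips the rewrite entirely.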
	I1002 10:58:37.662534 2249882 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2134307/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2134307/.minikube}
	I1002 10:58:37.662550 2249882 ubuntu.go:177] setting up certificates
	I1002 10:58:37.662561 2249882 provision.go:83] configureAuth start
	I1002 10:58:37.662631 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m02
	I1002 10:58:37.682238 2249882 provision.go:138] copyHostCerts
	I1002 10:58:37.682280 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:58:37.682310 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem, removing ...
	I1002 10:58:37.682323 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:58:37.682446 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem (1082 bytes)
	I1002 10:58:37.682552 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:58:37.682577 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem, removing ...
	I1002 10:58:37.682586 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:58:37.682617 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem (1123 bytes)
	I1002 10:58:37.682669 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:58:37.682690 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem, removing ...
	I1002 10:58:37.682697 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:58:37.682723 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem (1679 bytes)
	I1002 10:58:37.682775 2249882 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem org=jenkins.multinode-899833-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-899833-m02]
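The server certificate generated at this step carries the node IP, loopback, and hostname as SANs (visible in the `san=[...]` list above). minikube does this in Go, but an equivalent self-signed certificate can be sketched with `openssl` (file names and the `-subj` value are illustrative; requires OpenSSL 1.1.1+ for `-addext`):

```shell
# Throwaway key + self-signed cert with the same kind of SAN list the log shows.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout server-key.pem -out server.pem \
  -subj "/O=jenkins.multinode-899833-m02" \
  -addext "subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:multinode-899833-m02"
# Inspect the SANs that ended up in the certificate:
openssl x509 -in server.pem -noout -ext subjectAltName
```

The SAN list matters because the Docker daemon is later started with `--tlsverify` against this cert, so clients must be able to reach it by one of these names or IPs.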
	I1002 10:58:37.985542 2249882 provision.go:172] copyRemoteCerts
	I1002 10:58:37.985610 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:58:37.985660 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.007200 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:38.108779 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 10:58:38.108842 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:58:38.139812 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 10:58:38.139930 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 10:58:38.177597 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 10:58:38.177658 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 10:58:38.208484 2249882 provision.go:86] duration metric: configureAuth took 545.903844ms
	I1002 10:58:38.208512 2249882 ubuntu.go:193] setting minikube options for container-runtime
	I1002 10:58:38.208765 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:38.208826 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.226672 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:38.227085 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:38.227097 2249882 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 10:58:38.374948 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 10:58:38.375017 2249882 ubuntu.go:71] root file system type: overlay
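The root-filesystem probe is a plain coreutils one-liner; run locally it reports whatever `/` is mounted as (inside the kic container it is `overlay`, as the output above shows):

```shell
# Same probe the provisioner runs over SSH: filesystem type of /.
df --output=fstype / | tail -n 1
```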
	I1002 10:58:38.375163 2249882 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 10:58:38.375239 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.394214 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:38.394636 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:38.394718 2249882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 10:58:38.551594 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 10:58:38.551687 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.577216 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:38.577652 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:38.577679 2249882 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 10:58:38.728314 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
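The unit update above uses a write-then-swap idiom: the new file is compared against the installed one, and the move plus `daemon-reload`/`restart` happen only when they differ (here `diff` found no difference, so nothing was swapped). A local sketch of the same pattern with scratch files (paths are illustrative; the `systemctl` calls from the log are replaced by an echo so this runs anywhere):

```shell
# Replace dst with dst.new only if their contents differ.
dst=$(mktemp)
printf 'old\n' > "$dst"
printf 'new\n' > "$dst.new"
diff -u "$dst" "$dst.new" || {
  mv "$dst.new" "$dst"
  echo "unit changed: would daemon-reload and restart docker here"
}
cat "$dst"
```

Because `diff` exits non-zero only on a difference, an unchanged unit file costs nothing and, importantly, avoids an unnecessary Docker restart on every provisioning pass.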
	I1002 10:58:38.728336 2249882 machine.go:91] provisioned docker machine in 4.401055708s
	I1002 10:58:38.728346 2249882 start.go:300] post-start starting for "multinode-899833-m02" (driver="docker")
	I1002 10:58:38.728357 2249882 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:58:38.728421 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:58:38.728460 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.747308 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:38.848212 2249882 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:58:38.852528 2249882 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 10:58:38.852547 2249882 command_runner.go:130] > NAME="Ubuntu"
	I1002 10:58:38.852554 2249882 command_runner.go:130] > VERSION_ID="22.04"
	I1002 10:58:38.852561 2249882 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 10:58:38.852567 2249882 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 10:58:38.852574 2249882 command_runner.go:130] > ID=ubuntu
	I1002 10:58:38.852579 2249882 command_runner.go:130] > ID_LIKE=debian
	I1002 10:58:38.852585 2249882 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 10:58:38.852591 2249882 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 10:58:38.852598 2249882 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 10:58:38.852606 2249882 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 10:58:38.852614 2249882 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 10:58:38.852664 2249882 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 10:58:38.852698 2249882 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 10:58:38.852711 2249882 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 10:58:38.852720 2249882 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 10:58:38.852732 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/addons for local assets ...
	I1002 10:58:38.852797 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/files for local assets ...
	I1002 10:58:38.852877 2249882 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> 21397002.pem in /etc/ssl/certs
	I1002 10:58:38.852887 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /etc/ssl/certs/21397002.pem
	I1002 10:58:38.853000 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 10:58:38.863724 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:58:38.892327 2249882 start.go:303] post-start completed in 163.964169ms
	I1002 10:58:38.892415 2249882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:58:38.892463 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.910228 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:39.004975 2249882 command_runner.go:130] > 12%
	I1002 10:58:39.005070 2249882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 10:58:39.011792 2249882 command_runner.go:130] > 173G
	I1002 10:58:39.011831 2249882 fix.go:56] fixHost completed within 5.135460901s
	I1002 10:58:39.011860 2249882 start.go:83] releasing machines lock for "multinode-899833-m02", held for 5.135508219s
	I1002 10:58:39.011949 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m02
	I1002 10:58:39.036027 2249882 out.go:177] * Found network options:
	I1002 10:58:39.037973 2249882 out.go:177]   - NO_PROXY=192.168.58.2
	W1002 10:58:39.039923 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 10:58:39.039974 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 10:58:39.040067 2249882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 10:58:39.040128 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:39.040425 2249882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:58:39.040483 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:39.074154 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:39.080502 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:39.170914 2249882 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 10:58:39.170936 2249882 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I1002 10:58:39.170945 2249882 command_runner.go:130] > Device: d0h/208d	Inode: 1836145     Links: 1
	I1002 10:58:39.170952 2249882 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:58:39.170959 2249882 command_runner.go:130] > Access: 2023-10-02 10:55:24.702948257 +0000
	I1002 10:58:39.170966 2249882 command_runner.go:130] > Modify: 2023-10-02 10:55:24.550949068 +0000
	I1002 10:58:39.170972 2249882 command_runner.go:130] > Change: 2023-10-02 10:55:24.550949068 +0000
	I1002 10:58:39.170978 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:55:24.550949068 +0000
	I1002 10:58:39.171477 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 10:58:39.307310 2249882 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 10:58:39.311127 2249882 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 10:58:39.311206 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:58:39.324125 2249882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 10:58:39.324153 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:58:39.324188 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:58:39.324280 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:58:39.344988 2249882 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1002 10:58:39.346500 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 10:58:39.358620 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 10:58:39.370823 2249882 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 10:58:39.370944 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 10:58:39.383128 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:58:39.398232 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 10:58:39.410713 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:58:39.423803 2249882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:58:39.435850 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 10:58:39.450908 2249882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:58:39.461844 2249882 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 10:58:39.463500 2249882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:58:39.473416 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:39.568126 2249882 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 10:58:39.689209 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:58:39.689318 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:58:39.689387 2249882 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 10:58:39.703424 2249882 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1002 10:58:39.704411 2249882 command_runner.go:130] > [Unit]
	I1002 10:58:39.704459 2249882 command_runner.go:130] > Description=Docker Application Container Engine
	I1002 10:58:39.704488 2249882 command_runner.go:130] > Documentation=https://docs.docker.com
	I1002 10:58:39.704507 2249882 command_runner.go:130] > BindsTo=containerd.service
	I1002 10:58:39.704536 2249882 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1002 10:58:39.704558 2249882 command_runner.go:130] > Wants=network-online.target
	I1002 10:58:39.704587 2249882 command_runner.go:130] > Requires=docker.socket
	I1002 10:58:39.704611 2249882 command_runner.go:130] > StartLimitBurst=3
	I1002 10:58:39.704664 2249882 command_runner.go:130] > StartLimitIntervalSec=60
	I1002 10:58:39.704682 2249882 command_runner.go:130] > [Service]
	I1002 10:58:39.704705 2249882 command_runner.go:130] > Type=notify
	I1002 10:58:39.704726 2249882 command_runner.go:130] > Restart=on-failure
	I1002 10:58:39.704772 2249882 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1002 10:58:39.704802 2249882 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1002 10:58:39.704829 2249882 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1002 10:58:39.704861 2249882 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1002 10:58:39.704889 2249882 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1002 10:58:39.704921 2249882 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1002 10:58:39.704951 2249882 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1002 10:58:39.704989 2249882 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1002 10:58:39.705015 2249882 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1002 10:58:39.705036 2249882 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1002 10:58:39.705070 2249882 command_runner.go:130] > ExecStart=
	I1002 10:58:39.705111 2249882 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1002 10:58:39.705135 2249882 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1002 10:58:39.705164 2249882 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1002 10:58:39.705194 2249882 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1002 10:58:39.705213 2249882 command_runner.go:130] > LimitNOFILE=infinity
	I1002 10:58:39.705243 2249882 command_runner.go:130] > LimitNPROC=infinity
	I1002 10:58:39.705278 2249882 command_runner.go:130] > LimitCORE=infinity
	I1002 10:58:39.705307 2249882 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1002 10:58:39.705327 2249882 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1002 10:58:39.705353 2249882 command_runner.go:130] > TasksMax=infinity
	I1002 10:58:39.705375 2249882 command_runner.go:130] > TimeoutStartSec=0
	I1002 10:58:39.705400 2249882 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1002 10:58:39.705438 2249882 command_runner.go:130] > Delegate=yes
	I1002 10:58:39.705468 2249882 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1002 10:58:39.705502 2249882 command_runner.go:130] > KillMode=process
	I1002 10:58:39.705542 2249882 command_runner.go:130] > [Install]
	I1002 10:58:39.705565 2249882 command_runner.go:130] > WantedBy=multi-user.target
	I1002 10:58:39.706646 2249882 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 10:58:39.706739 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 10:58:39.725791 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:58:39.745374 2249882 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1002 10:58:39.747259 2249882 ssh_runner.go:195] Run: which cri-dockerd
	I1002 10:58:39.751497 2249882 command_runner.go:130] > /usr/bin/cri-dockerd
	I1002 10:58:39.752216 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 10:58:39.765095 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 10:58:39.803541 2249882 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 10:58:39.927698 2249882 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 10:58:40.056784 2249882 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 10:58:40.056827 2249882 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 10:58:40.094536 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:40.205514 2249882 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:58:40.536851 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:58:40.646452 2249882 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 10:58:40.748708 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:58:40.850979 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:40.947010 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 10:58:40.979266 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:41.096676 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 10:58:41.202347 2249882 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 10:58:41.202470 2249882 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 10:58:41.207630 2249882 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1002 10:58:41.207702 2249882 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 10:58:41.207724 2249882 command_runner.go:130] > Device: feh/254d	Inode: 240         Links: 1
	I1002 10:58:41.207749 2249882 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1002 10:58:41.207786 2249882 command_runner.go:130] > Access: 2023-10-02 10:58:41.109893738 +0000
	I1002 10:58:41.207809 2249882 command_runner.go:130] > Modify: 2023-10-02 10:58:41.109893738 +0000
	I1002 10:58:41.207842 2249882 command_runner.go:130] > Change: 2023-10-02 10:58:41.109893738 +0000
	I1002 10:58:41.207867 2249882 command_runner.go:130] >  Birth: -
	I1002 10:58:41.208571 2249882 start.go:537] Will wait 60s for crictl version
	I1002 10:58:41.208665 2249882 ssh_runner.go:195] Run: which crictl
	I1002 10:58:41.214462 2249882 command_runner.go:130] > /usr/bin/crictl
	I1002 10:58:41.215227 2249882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:58:41.272178 2249882 command_runner.go:130] > Version:  0.1.0
	I1002 10:58:41.272489 2249882 command_runner.go:130] > RuntimeName:  docker
	I1002 10:58:41.272734 2249882 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1002 10:58:41.272966 2249882 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 10:58:41.275644 2249882 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 10:58:41.275766 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:58:41.304636 2249882 command_runner.go:130] > 24.0.6
	I1002 10:58:41.306632 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:58:41.338120 2249882 command_runner.go:130] > 24.0.6
	I1002 10:58:41.342860 2249882 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 10:58:41.345120 2249882 out.go:177]   - env NO_PROXY=192.168.58.2
	I1002 10:58:41.347005 2249882 cli_runner.go:164] Run: docker network inspect multinode-899833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:58:41.367378 2249882 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 10:58:41.372466 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:58:41.388337 2249882 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833 for IP: 192.168.58.3
	I1002 10:58:41.388369 2249882 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1d43a94e604cdd7d897bd7b1078cd14b38f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:58:41.388512 2249882 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key
	I1002 10:58:41.388552 2249882 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key
	I1002 10:58:41.388562 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 10:58:41.388575 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 10:58:41.388587 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 10:58:41.388599 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 10:58:41.388655 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem (1338 bytes)
	W1002 10:58:41.388685 2249882 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700_empty.pem, impossibly tiny 0 bytes
	I1002 10:58:41.388695 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 10:58:41.388719 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:58:41.388742 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:58:41.388764 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem (1679 bytes)
	I1002 10:58:41.388811 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:58:41.388838 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem -> /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.388850 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.388861 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.389202 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:58:41.419242 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 10:58:41.450028 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:58:41.484475 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 10:58:41.515125 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem --> /usr/share/ca-certificates/2139700.pem (1338 bytes)
	I1002 10:58:41.545712 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /usr/share/ca-certificates/21397002.pem (1708 bytes)
	I1002 10:58:41.575229 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:58:41.611525 2249882 ssh_runner.go:195] Run: openssl version
	I1002 10:58:41.618526 2249882 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 10:58:41.619381 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21397002.pem && ln -fs /usr/share/ca-certificates/21397002.pem /etc/ssl/certs/21397002.pem"
	I1002 10:58:41.631766 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.637382 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.637965 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.638033 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.646416 2249882 command_runner.go:130] > 3ec20f2e
	I1002 10:58:41.646886 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21397002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 10:58:41.658930 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:58:41.672626 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.678265 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.678294 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.678352 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.686663 2249882 command_runner.go:130] > b5213941
	I1002 10:58:41.687119 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 10:58:41.698231 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2139700.pem && ln -fs /usr/share/ca-certificates/2139700.pem /etc/ssl/certs/2139700.pem"
	I1002 10:58:41.709823 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.714794 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.714826 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.714888 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.723318 2249882 command_runner.go:130] > 51391683
	I1002 10:58:41.723764 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2139700.pem /etc/ssl/certs/51391683.0"
	I1002 10:58:41.734989 2249882 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:58:41.741322 2249882 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:58:41.741352 2249882 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:58:41.741430 2249882 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 10:58:41.806382 2249882 command_runner.go:130] > cgroupfs
	I1002 10:58:41.807749 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:58:41.807800 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:58:41.807823 2249882 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:58:41.807853 2249882 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899833 NodeName:multinode-899833-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 10:58:41.808020 2249882 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899833-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:58:41.808080 2249882 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-899833-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:58:41.808177 2249882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 10:58:41.817751 2249882 command_runner.go:130] > kubeadm
	I1002 10:58:41.817811 2249882 command_runner.go:130] > kubectl
	I1002 10:58:41.817823 2249882 command_runner.go:130] > kubelet
	I1002 10:58:41.818955 2249882 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:58:41.819019 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 10:58:41.829397 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1002 10:58:41.856290 2249882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
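The two "scp memory" lines above materialize in-memory systemd unit content onto the node and report its byte size. A minimal sketch of that step, run against a scratch directory rather than the real `/etc/systemd/system` path; the drop-in contents here are illustrative, not the exact 381 bytes minikube transfers:

```shell
#!/usr/bin/env sh
# Sketch: write an in-memory kubelet drop-in to disk, then report its size
# the way the log does ("... (381 bytes)"). Paths/content are illustrative.
set -eu

dir=$(mktemp -d)
dropin="$dir/10-kubeadm.conf"

cat > "$dropin" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --config=/var/lib/kubelet/config.yaml
EOF

# Byte count of what was written, mirroring the log's "(N bytes)" suffix.
wc -c < "$dropin"
```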
	I1002 10:58:41.877952 2249882 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 10:58:41.882345 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
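The bash one-liner above pins `control-plane.minikube.internal` in `/etc/hosts` by filtering out any stale entry and re-appending a fresh one. A sketch of the same "filter out, re-add" pattern against a scratch copy instead of the real `/etc/hosts` (the IP and hostname match the log; the file path and the pre-existing stale entry are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch of minikube's /etc/hosts pinning, run on a temp file so it is safe
# to execute anywhere. The stale 192.168.58.9 entry is invented to show the
# replacement behavior.
set -eu

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.58.9\tcontrol-plane.minikube.internal\n' > "$hosts"

ip=192.168.58.2
name=control-plane.minikube.internal

# Drop any line ending in "<tab><name>", then append the fresh mapping --
# the same pattern as the log's { grep -v ...; echo ...; } > tmp; cp tmp hosts.
tmp=$(mktemp)
{ grep -v $'\t'"${name}"'$' "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
mv "$tmp" "$hosts"

grep "$name" "$hosts"
```

After running, the file holds exactly one entry for the name, pointing at 192.168.58.2.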
	I1002 10:58:41.895489 2249882 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:58:41.895883 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:41.895829 2249882 start.go:304] JoinCluster: &{Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevir
t:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Aut
oPauseInterval:1m0s}
	I1002 10:58:41.895950 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 10:58:41.896017 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:58:41.913968 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:58:42.108764 2249882 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token aiz9b8.o67i19javg1wra1n --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d 
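The `kubeadm token create --print-join-command` output above is a plain shell string that minikube later replays verbatim. A sketch of pulling the token and discovery hash back out of such a line with awk; the sample string below copies the values from the log line above, purely for illustration:

```shell
#!/usr/bin/env sh
# Sketch: extract --token and --discovery-token-ca-cert-hash values from a
# captured "kubeadm ... --print-join-command" line. Sample taken from the log.
set -eu

join='kubeadm join control-plane.minikube.internal:8443 --token aiz9b8.o67i19javg1wra1n --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d'

# Walk the fields and print the argument following each flag of interest.
token=$(printf '%s\n' "$join" | awk '{for (i=1;i<NF;i++) if ($i=="--token") print $(i+1)}')
hash=$(printf '%s\n' "$join" | awk '{for (i=1;i<NF;i++) if ($i=="--discovery-token-ca-cert-hash") print $(i+1)}')

echo "token=$token"
echo "hash=$hash"
```

This is the shape of data minikube feeds into the later `kubeadm join ... --token ... --discovery-token-ca-cert-hash ...` invocation seen further down in the log.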
	I1002 10:58:42.108814 2249882 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 10:58:42.108852 2249882 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:58:42.109158 2249882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-899833-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1002 10:58:42.109212 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:58:42.132417 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:58:42.300703 2249882 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1002 10:58:42.364419 2249882 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-lmfm5, kube-system/kube-proxy-76wth
	I1002 10:58:45.384586 2249882 command_runner.go:130] > node/multinode-899833-m02 cordoned
	I1002 10:58:45.384613 2249882 command_runner.go:130] > pod "busybox-5bc68d56bd-wzmtg" has DeletionTimestamp older than 1 seconds, skipping
	I1002 10:58:45.384621 2249882 command_runner.go:130] > node/multinode-899833-m02 drained
	I1002 10:58:45.384638 2249882 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-899833-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.275453513s)
	I1002 10:58:45.384650 2249882 node.go:108] successfully drained node "m02"
	I1002 10:58:45.385030 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:45.385315 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:58:45.385730 2249882 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1002 10:58:45.385780 2249882 round_trippers.go:463] DELETE https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:45.385790 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:45.385799 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:45.385805 2249882 round_trippers.go:473]     Content-Type: application/json
	I1002 10:58:45.385815 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:45.389895 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:45.389917 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:45.389926 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:45.389933 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:45.389939 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:45.389945 2249882 round_trippers.go:580]     Content-Length: 171
	I1002 10:58:45.389951 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:45 GMT
	I1002 10:58:45.389963 2249882 round_trippers.go:580]     Audit-Id: 7c902483-793d-4af9-80fa-b8df7ba38d1d
	I1002 10:58:45.389969 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:45.390257 2249882 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-899833-m02","kind":"nodes","uid":"fae5cedd-05b9-4641-a9c0-540d8cb0740c"}}
	I1002 10:58:45.390295 2249882 node.go:124] successfully deleted node "m02"
	I1002 10:58:45.390303 2249882 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 10:58:45.390322 2249882 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 10:58:45.390340 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aiz9b8.o67i19javg1wra1n --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m02"
	I1002 10:58:45.444997 2249882 command_runner.go:130] ! W1002 10:58:45.444554    1534 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:58:45.445622 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:58:45.506197 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:58:45.600027 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:58:45.600091 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:58:46.400816 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:58:46.400840 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:58:46.400850 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:58:46.400857 2249882 command_runner.go:130] > OS: Linux
	I1002 10:58:46.400864 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:58:46.400887 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:58:46.400900 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:58:46.400906 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:58:46.400919 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:58:46.400926 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:58:46.400938 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:58:46.400945 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:58:46.400954 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:58:46.400961 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:58:46.400971 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 10:58:46.400984 2249882 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 10:58:46.400994 2249882 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 10:58:46.401006 2249882 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 10:58:46.401016 2249882 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1002 10:58:46.401026 2249882 command_runner.go:130] > This node has joined the cluster:
	I1002 10:58:46.401034 2249882 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1002 10:58:46.401044 2249882 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1002 10:58:46.401052 2249882 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1002 10:58:46.401066 2249882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aiz9b8.o67i19javg1wra1n --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m02": (1.010714732s)
	I1002 10:58:46.401086 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1002 10:58:46.630483 2249882 start.go:306] JoinCluster complete in 4.734646858s
	I1002 10:58:46.630512 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:58:46.630518 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:58:46.630574 2249882 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 10:58:46.635558 2249882 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 10:58:46.635580 2249882 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1002 10:58:46.635588 2249882 command_runner.go:130] > Device: 36h/54d	Inode: 1826972     Links: 1
	I1002 10:58:46.635596 2249882 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:58:46.635603 2249882 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1002 10:58:46.635609 2249882 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1002 10:58:46.635615 2249882 command_runner.go:130] > Change: 2023-10-02 10:36:11.204484217 +0000
	I1002 10:58:46.635621 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:36:11.160484379 +0000
	I1002 10:58:46.635673 2249882 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 10:58:46.635688 2249882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 10:58:46.664059 2249882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 10:58:46.927393 2249882 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 10:58:46.939087 2249882 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 10:58:46.942282 2249882 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 10:58:46.953445 2249882 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 10:58:46.959074 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:46.959378 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:58:46.959750 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 10:58:46.959765 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:46.959774 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:46.959784 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:46.962338 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:46.962361 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:46.962368 2249882 round_trippers.go:580]     Audit-Id: f4ff8b04-3c09-4bd2-a5f5-c363566ec78f
	I1002 10:58:46.962375 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:46.962381 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:46.962389 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:46.962395 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:46.962402 2249882 round_trippers.go:580]     Content-Length: 291
	I1002 10:58:46.962412 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:46 GMT
	I1002 10:58:46.962435 2249882 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b08b27fb-9d04-4b90-bfa5-b624291dfc83","resourceVersion":"813","creationTimestamp":"2023-10-02T10:54:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 10:58:46.962529 2249882 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-899833" context rescaled to 1 replicas
	I1002 10:58:46.962555 2249882 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 10:58:46.966030 2249882 out.go:177] * Verifying Kubernetes components...
	I1002 10:58:46.968076 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:58:46.983691 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:46.984471 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:58:46.984754 2249882 node_ready.go:35] waiting up to 6m0s for node "multinode-899833-m02" to be "Ready" ...
	I1002 10:58:46.984829 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:46.984846 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:46.984855 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:46.984863 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:46.987514 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:46.987572 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:46.987593 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:46.987616 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:46 GMT
	I1002 10:58:46.987652 2249882 round_trippers.go:580]     Audit-Id: 15075f8d-10d2-4d49-9e76-538893f8a9b3
	I1002 10:58:46.987679 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:46.987695 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:46.987701 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:46.987846 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"7606c903-0e15-4319-b574-a2d4b3326b01","resourceVersion":"869","creationTimestamp":"2023-10-02T10:58:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4244 chars]
	I1002 10:58:46.988380 2249882 node_ready.go:49] node "multinode-899833-m02" has status "Ready":"True"
	I1002 10:58:46.988400 2249882 node_ready.go:38] duration metric: took 3.624875ms waiting for node "multinode-899833-m02" to be "Ready" ...
	I1002 10:58:46.988442 2249882 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:46.988514 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:46.988524 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:46.988533 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:46.988540 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:46.992913 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:46.992987 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:46.993010 2249882 round_trippers.go:580]     Audit-Id: f7674812-1990-4b23-b5de-2dece07163f4
	I1002 10:58:46.993035 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:46.993072 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:46.993098 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:46.993139 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:46.993164 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:46 GMT
	I1002 10:58:46.993649 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"869"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84334 chars]
	I1002 10:58:46.997385 2249882 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:46.997484 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:46.997492 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:46.997501 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:46.997508 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.001241 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:47.001349 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.001372 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.001396 2249882 round_trippers.go:580]     Audit-Id: f2411a28-6f35-4925-9c47-841571754743
	I1002 10:58:47.001432 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.001463 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.001489 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.001512 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.001669 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1002 10:58:47.002321 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:47.002341 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.002351 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.002359 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.012329 2249882 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1002 10:58:47.012360 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.012368 2249882 round_trippers.go:580]     Audit-Id: d68bc8a6-0530-4de3-9074-57814eb42abe
	I1002 10:58:47.012375 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.012381 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.012387 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.012394 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.012428 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.012583 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:47.013044 2249882 pod_ready.go:92] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:47.013067 2249882 pod_ready.go:81] duration metric: took 15.64872ms waiting for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.013120 2249882 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.013219 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-899833
	I1002 10:58:47.013228 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.013236 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.013244 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.015693 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.015712 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.015720 2249882 round_trippers.go:580]     Audit-Id: fd7f3b43-e3ee-4582-b3a6-fa2a49d6b655
	I1002 10:58:47.015727 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.015758 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.015773 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.015780 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.015787 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.016260 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899833","namespace":"kube-system","uid":"50fafe88-1106-4021-9c0c-7bb9d9d17ffb","resourceVersion":"780","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.mirror":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.seen":"2023-10-02T10:54:43.504344255Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I1002 10:58:47.016781 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:47.016800 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.016809 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.016829 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.019765 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.019788 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.019796 2249882 round_trippers.go:580]     Audit-Id: 7dbd89ea-8458-41e7-94f8-5c7c45f603bf
	I1002 10:58:47.019803 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.019809 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.019815 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.019841 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.019856 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.020376 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:47.020870 2249882 pod_ready.go:92] pod "etcd-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:47.020914 2249882 pod_ready.go:81] duration metric: took 7.777574ms waiting for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.020949 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.021041 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899833
	I1002 10:58:47.021076 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.021100 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.021123 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.023526 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.023575 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.023613 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.023637 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.023659 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.023694 2249882 round_trippers.go:580]     Audit-Id: 05f070c7-c407-4a06-8b68-d4851ce89a4b
	I1002 10:58:47.023719 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.023740 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.028617 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899833","namespace":"kube-system","uid":"fb05b79f-58ee-4097-aa20-b9721f21d29c","resourceVersion":"785","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.mirror":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.seen":"2023-10-02T10:54:43.504350548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8445 chars]
	I1002 10:58:47.029331 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:47.029379 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.029405 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.029431 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.040902 2249882 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1002 10:58:47.040974 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.040997 2249882 round_trippers.go:580]     Audit-Id: 25c39b42-f031-493d-9b6a-07a7796d125e
	I1002 10:58:47.041018 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.041054 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.041079 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.041100 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.041135 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.041695 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:47.042175 2249882 pod_ready.go:92] pod "kube-apiserver-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:47.042216 2249882 pod_ready.go:81] duration metric: took 21.245694ms waiting for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.042242 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.042339 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899833
	I1002 10:58:47.042373 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.042394 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.042417 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.052124 2249882 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1002 10:58:47.052201 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.052225 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.052249 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.052284 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.052309 2249882 round_trippers.go:580]     Audit-Id: d3348536-bf24-4719-80f9-c867d42b28a8
	I1002 10:58:47.052332 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.052365 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.053512 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899833","namespace":"kube-system","uid":"92b1c97d-b38b-405b-9e51-272591b87dcf","resourceVersion":"798","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.mirror":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.seen":"2023-10-02T10:54:43.504351845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8018 chars]
	I1002 10:58:47.054204 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:47.054249 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.054274 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.054296 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.058470 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:47.058529 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.058553 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.058575 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.058611 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.058636 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.058659 2249882 round_trippers.go:580]     Audit-Id: fa4ee297-fb15-477b-86c8-aad4a907a8d1
	I1002 10:58:47.058696 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.058880 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:47.059380 2249882 pod_ready.go:92] pod "kube-controller-manager-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:47.059432 2249882 pod_ready.go:81] duration metric: took 17.169351ms waiting for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.059458 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.185822 2249882 request.go:629] Waited for 126.247359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:47.185900 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:47.185913 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.185924 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.185936 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.188471 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.188496 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.188505 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.188512 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.188518 2249882 round_trippers.go:580]     Audit-Id: 5ad84008-5ddf-4525-8f99-cf53887225b9
	I1002 10:58:47.188524 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.188530 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.188537 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.188642 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"873","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I1002 10:58:47.385519 2249882 request.go:629] Waited for 196.329353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:47.385632 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:47.385670 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.385698 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.385739 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.388456 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.388514 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.388528 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.388535 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.388541 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.388547 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.388554 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.388568 2249882 round_trippers.go:580]     Audit-Id: d829c825-b25f-4771-8d56-bd8f6d7dc99b
	I1002 10:58:47.389059 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"7606c903-0e15-4319-b574-a2d4b3326b01","resourceVersion":"869","creationTimestamp":"2023-10-02T10:58:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4244 chars]
	I1002 10:58:47.585919 2249882 request.go:629] Waited for 196.353689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:47.586023 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:47.586033 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.586043 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.586053 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.588709 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.588780 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.588803 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.588830 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.588864 2249882 round_trippers.go:580]     Audit-Id: f04e7994-d299-401d-b2c6-a73780405388
	I1002 10:58:47.588888 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.588909 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.588931 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.589323 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"873","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I1002 10:58:47.785012 2249882 request.go:629] Waited for 195.149765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:47.785093 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:47.785113 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.785123 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.785133 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.787948 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.788005 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.788027 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.788050 2249882 round_trippers.go:580]     Audit-Id: 6f6145ae-06c8-4407-a945-4f76311b2986
	I1002 10:58:47.788083 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.788109 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.788132 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.788153 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.788324 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"7606c903-0e15-4319-b574-a2d4b3326b01","resourceVersion":"869","creationTimestamp":"2023-10-02T10:58:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4244 chars]
	I1002 10:58:48.289458 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:48.289481 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.289494 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.289502 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.292154 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.292178 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.292187 2249882 round_trippers.go:580]     Audit-Id: 45123e2d-3be3-4e36-9aa7-27961e4a25c6
	I1002 10:58:48.292194 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.292200 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.292206 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.292212 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.292219 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.292483 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"890","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1002 10:58:48.292979 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:48.292996 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.293007 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.293015 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.295416 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.295455 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.295464 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.295472 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.295478 2249882 round_trippers.go:580]     Audit-Id: 0e1b3ed6-f37d-47f3-9cdd-d8b2760f0d4e
	I1002 10:58:48.295488 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.295495 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.295505 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.295602 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"7606c903-0e15-4319-b574-a2d4b3326b01","resourceVersion":"869","creationTimestamp":"2023-10-02T10:58:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4244 chars]
	I1002 10:58:48.295936 2249882 pod_ready.go:92] pod "kube-proxy-76wth" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:48.295955 2249882 pod_ready.go:81] duration metric: took 1.236458604s waiting for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:48.295967 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:48.385290 2249882 request.go:629] Waited for 89.214567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:48.385368 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:48.385380 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.385390 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.385398 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.388217 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.388238 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.388247 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.388253 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.388260 2249882 round_trippers.go:580]     Audit-Id: d5f9879f-2f51-4f2e-a5d8-ed9b7c81a336
	I1002 10:58:48.388274 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.388282 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.388292 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.388525 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fjcp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"2d159cb7-69ca-4b3c-b918-b698bb157220","resourceVersion":"712","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1002 10:58:48.585522 2249882 request.go:629] Waited for 196.337352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:48.585605 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:48.585611 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.585619 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.585633 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.588494 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.588564 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.588588 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.588610 2249882 round_trippers.go:580]     Audit-Id: fd4c27dc-924b-4f30-913f-c0c56256e5c6
	I1002 10:58:48.588632 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.588653 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.588684 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.588706 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.589047 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:48.589470 2249882 pod_ready.go:92] pod "kube-proxy-fjcp8" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:48.589488 2249882 pod_ready.go:81] duration metric: took 293.508241ms waiting for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:48.589500 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:48.785890 2249882 request.go:629] Waited for 196.32054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:48.785954 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:48.785960 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.785969 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.785976 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.788574 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.788601 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.788610 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.788618 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.788624 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.788630 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.788636 2249882 round_trippers.go:580]     Audit-Id: ab526cc8-cc6d-490d-954e-97496194efc9
	I1002 10:58:48.788643 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.788737 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xnhqd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd","resourceVersion":"846","creationTimestamp":"2023-10-02T10:56:32Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5967 chars]
	I1002 10:58:48.985574 2249882 request.go:629] Waited for 196.313492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:48.985648 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:48.985657 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.985666 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.985677 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.988277 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.988299 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.988307 2249882 round_trippers.go:580]     Audit-Id: 3691ed64-1d9a-4b07-adb3-0acd24895ded
	I1002 10:58:48.988314 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.988320 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.988326 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.988333 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.988340 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.988459 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m03","uid":"332112e7-39bc-44d1-86bd-88e1074e5d8d","resourceVersion":"845","creationTimestamp":"2023-10-02T10:56:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4552 chars]
	I1002 10:58:48.988814 2249882 pod_ready.go:97] node "multinode-899833-m03" hosting pod "kube-proxy-xnhqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-899833-m03" has status "Ready":"Unknown"
	I1002 10:58:48.988837 2249882 pod_ready.go:81] duration metric: took 399.3281ms waiting for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	E1002 10:58:48.988847 2249882 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-899833-m03" hosting pod "kube-proxy-xnhqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-899833-m03" has status "Ready":"Unknown"
	I1002 10:58:48.988860 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:49.185276 2249882 request.go:629] Waited for 196.328605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:49.185380 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:49.185420 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:49.185442 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:49.185451 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:49.188037 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:49.188059 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:49.188067 2249882 round_trippers.go:580]     Audit-Id: aa9cfc19-50c2-4802-aa98-1f998c20dd07
	I1002 10:58:49.188074 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:49.188080 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:49.188089 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:49.188096 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:49.188104 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:49 GMT
	I1002 10:58:49.188191 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899833","namespace":"kube-system","uid":"65999631-952f-42f1-ae73-f32996dc19fb","resourceVersion":"797","creationTimestamp":"2023-10-02T10:54:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.mirror":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.seen":"2023-10-02T10:54:35.990546729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I1002 10:58:49.384884 2249882 request.go:629] Waited for 196.254194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:49.384961 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:49.384967 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:49.384983 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:49.384990 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:49.387516 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:49.387546 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:49.387555 2249882 round_trippers.go:580]     Audit-Id: ea48e7ff-281b-4d17-9e18-f4b25cc644e6
	I1002 10:58:49.387576 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:49.387583 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:49.387593 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:49.387599 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:49.387615 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:49 GMT
	I1002 10:58:49.387724 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:49.388116 2249882 pod_ready.go:92] pod "kube-scheduler-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:49.388132 2249882 pod_ready.go:81] duration metric: took 399.26241ms waiting for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:49.388144 2249882 pod_ready.go:38] duration metric: took 2.39968877s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:49.388166 2249882 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 10:58:49.388231 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:58:49.401375 2249882 system_svc.go:56] duration metric: took 13.198969ms WaitForService to wait for kubelet.
	I1002 10:58:49.401402 2249882 kubeadm.go:581] duration metric: took 2.438820931s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 10:58:49.401435 2249882 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:58:49.585826 2249882 request.go:629] Waited for 184.31738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1002 10:58:49.585902 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 10:58:49.585912 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:49.585938 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:49.585951 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:49.589189 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:49.589215 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:49.589225 2249882 round_trippers.go:580]     Audit-Id: e791ff26-de7b-4061-a21b-35eaba37c62f
	I1002 10:58:49.589232 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:49.589277 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:49.589293 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:49.589300 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:49.589328 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:49 GMT
	I1002 10:58:49.589570 2249882 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16123 chars]
	I1002 10:58:49.590396 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:49.590419 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:49.590429 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:49.590439 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:49.590444 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:49.590452 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:49.590457 2249882 node_conditions.go:105] duration metric: took 189.012616ms to run NodePressure ...
	I1002 10:58:49.590468 2249882 start.go:228] waiting for startup goroutines ...
	I1002 10:58:49.590493 2249882 start.go:242] writing updated cluster config ...
	I1002 10:58:49.590965 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:49.591065 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:49.595403 2249882 out.go:177] * Starting worker node multinode-899833-m03 in cluster multinode-899833
	I1002 10:58:49.597296 2249882 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:58:49.599158 2249882 out.go:177] * Pulling base image ...
	I1002 10:58:49.600939 2249882 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:58:49.600981 2249882 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:58:49.601009 2249882 cache.go:57] Caching tarball of preloaded images
	I1002 10:58:49.601107 2249882 preload.go:174] Found /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 10:58:49.601125 2249882 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 10:58:49.601278 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:49.618660 2249882 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 10:58:49.618686 2249882 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 10:58:49.618708 2249882 cache.go:195] Successfully downloaded all kic artifacts
	I1002 10:58:49.618741 2249882 start.go:365] acquiring machines lock for multinode-899833-m03: {Name:mk43e44e85df8dde2d3b8f9b294e7c14a9ba3c8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:58:49.618816 2249882 start.go:369] acquired machines lock for "multinode-899833-m03" in 50.83µs
	I1002 10:58:49.618840 2249882 start.go:96] Skipping create...Using existing machine configuration
	I1002 10:58:49.618849 2249882 fix.go:54] fixHost starting: m03
	I1002 10:58:49.619124 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833-m03 --format={{.State.Status}}
	I1002 10:58:49.639194 2249882 fix.go:102] recreateIfNeeded on multinode-899833-m03: state=Stopped err=<nil>
	W1002 10:58:49.639220 2249882 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 10:58:49.641576 2249882 out.go:177] * Restarting existing docker container for "multinode-899833-m03" ...
	I1002 10:58:49.643334 2249882 cli_runner.go:164] Run: docker start multinode-899833-m03
	I1002 10:58:50.020086 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833-m03 --format={{.State.Status}}
	I1002 10:58:50.054535 2249882 kic.go:426] container "multinode-899833-m03" state is running.
	I1002 10:58:50.054910 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m03
	I1002 10:58:50.094641 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:50.094918 2249882 machine.go:88] provisioning docker machine ...
	I1002 10:58:50.094939 2249882 ubuntu.go:169] provisioning hostname "multinode-899833-m03"
	I1002 10:58:50.094998 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:50.118187 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:50.118614 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:50.118628 2249882 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899833-m03 && echo "multinode-899833-m03" | sudo tee /etc/hostname
	I1002 10:58:50.119202 2249882 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60994->127.0.0.1:35600: read: connection reset by peer
	I1002 10:58:53.278358 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899833-m03
	
	I1002 10:58:53.278453 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:53.302262 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:53.302681 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:53.302705 2249882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899833-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899833-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899833-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:58:53.446719 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:58:53.446751 2249882 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2134307/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2134307/.minikube}
	I1002 10:58:53.446768 2249882 ubuntu.go:177] setting up certificates
	I1002 10:58:53.446777 2249882 provision.go:83] configureAuth start
	I1002 10:58:53.446841 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m03
	I1002 10:58:53.471639 2249882 provision.go:138] copyHostCerts
	I1002 10:58:53.471681 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:58:53.471717 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem, removing ...
	I1002 10:58:53.471731 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:58:53.471812 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem (1082 bytes)
	I1002 10:58:53.471895 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:58:53.471919 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem, removing ...
	I1002 10:58:53.471923 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:58:53.471950 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem (1123 bytes)
	I1002 10:58:53.472028 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:58:53.472050 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem, removing ...
	I1002 10:58:53.472055 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:58:53.472079 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem (1679 bytes)
	I1002 10:58:53.472122 2249882 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem org=jenkins.multinode-899833-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-899833-m03]
	I1002 10:58:55.571320 2249882 provision.go:172] copyRemoteCerts
	I1002 10:58:55.571392 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:58:55.571441 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:55.594497 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:55.696031 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 10:58:55.696091 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:58:55.726224 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 10:58:55.726285 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 10:58:55.757341 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 10:58:55.757404 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 10:58:55.787148 2249882 provision.go:86] duration metric: configureAuth took 2.34035234s
	I1002 10:58:55.787178 2249882 ubuntu.go:193] setting minikube options for container-runtime
	I1002 10:58:55.787444 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:55.787508 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:55.812106 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:55.812705 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:55.812723 2249882 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 10:58:55.960536 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 10:58:55.960559 2249882 ubuntu.go:71] root file system type: overlay
	I1002 10:58:55.960673 2249882 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 10:58:55.960745 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:55.983369 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:55.983793 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:55.983879 2249882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	Environment="NO_PROXY=192.168.58.2,192.168.58.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 10:58:56.136310 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	Environment=NO_PROXY=192.168.58.2,192.168.58.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 10:58:56.136402 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:56.155423 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:56.155824 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:56.155848 2249882 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 10:58:57.127324 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-02 10:56:25.218624660 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-02 10:58:56.129812635 +0000
	@@ -12,6 +12,8 @@
	 Type=notify
	 Restart=on-failure
	 
	+Environment=NO_PROXY=192.168.58.2
	+Environment=NO_PROXY=192.168.58.2,192.168.58.3
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1002 10:58:57.127393 2249882 machine.go:91] provisioned docker machine in 7.032463229s
	I1002 10:58:57.127419 2249882 start.go:300] post-start starting for "multinode-899833-m03" (driver="docker")
	I1002 10:58:57.127447 2249882 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:58:57.127549 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:58:57.127631 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:57.146671 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:57.249382 2249882 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:58:57.255390 2249882 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 10:58:57.255457 2249882 command_runner.go:130] > NAME="Ubuntu"
	I1002 10:58:57.255481 2249882 command_runner.go:130] > VERSION_ID="22.04"
	I1002 10:58:57.255494 2249882 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 10:58:57.255501 2249882 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 10:58:57.255505 2249882 command_runner.go:130] > ID=ubuntu
	I1002 10:58:57.255510 2249882 command_runner.go:130] > ID_LIKE=debian
	I1002 10:58:57.255516 2249882 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 10:58:57.255526 2249882 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 10:58:57.255537 2249882 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 10:58:57.255548 2249882 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 10:58:57.255556 2249882 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 10:58:57.255618 2249882 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 10:58:57.255645 2249882 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 10:58:57.255661 2249882 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 10:58:57.255669 2249882 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 10:58:57.255679 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/addons for local assets ...
	I1002 10:58:57.255749 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/files for local assets ...
	I1002 10:58:57.255848 2249882 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> 21397002.pem in /etc/ssl/certs
	I1002 10:58:57.255858 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /etc/ssl/certs/21397002.pem
	I1002 10:58:57.255973 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 10:58:57.268301 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:58:57.298852 2249882 start.go:303] post-start completed in 171.399689ms
	I1002 10:58:57.298942 2249882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:58:57.298984 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:57.321641 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:57.424519 2249882 command_runner.go:130] > 12%
	I1002 10:58:57.424644 2249882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 10:58:57.430906 2249882 command_runner.go:130] > 173G
	I1002 10:58:57.431273 2249882 fix.go:56] fixHost completed within 7.812419557s
	I1002 10:58:57.431313 2249882 start.go:83] releasing machines lock for "multinode-899833-m03", held for 7.812485043s
	I1002 10:58:57.431402 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m03
	I1002 10:58:57.454734 2249882 out.go:177] * Found network options:
	I1002 10:58:57.456468 2249882 out.go:177]   - NO_PROXY=192.168.58.2,192.168.58.3
	W1002 10:58:57.458502 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 10:58:57.458531 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 10:58:57.458563 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 10:58:57.458578 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 10:58:57.458649 2249882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 10:58:57.458693 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:57.458954 2249882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:58:57.459009 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:57.481406 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:57.485723 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:57.587457 2249882 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 10:58:57.587482 2249882 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I1002 10:58:57.587491 2249882 command_runner.go:130] > Device: 100031h/1048625d	Inode: 1836318     Links: 1
	I1002 10:58:57.587498 2249882 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:58:57.587535 2249882 command_runner.go:130] > Access: 2023-10-02 10:58:50.713841886 +0000
	I1002 10:58:57.587550 2249882 command_runner.go:130] > Modify: 2023-10-02 10:56:55.858460341 +0000
	I1002 10:58:57.587557 2249882 command_runner.go:130] > Change: 2023-10-02 10:56:55.858460341 +0000
	I1002 10:58:57.587563 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:56:55.858460341 +0000
	I1002 10:58:57.588168 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 10:58:57.729006 2249882 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 10:58:57.732193 2249882 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 10:58:57.732336 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:58:57.744259 2249882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 10:58:57.744286 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:58:57.744318 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:58:57.744412 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:58:57.766674 2249882 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1002 10:58:57.769429 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 10:58:57.782075 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 10:58:57.794212 2249882 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 10:58:57.794294 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 10:58:57.806609 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:58:57.823729 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 10:58:57.835229 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:58:57.848995 2249882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:58:57.861777 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 10:58:57.873618 2249882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:58:57.882659 2249882 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 10:58:57.883900 2249882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:58:57.893914 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:58.010215 2249882 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 10:58:58.120210 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:58:58.120253 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:58:58.120319 2249882 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 10:58:58.137162 2249882 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1002 10:58:58.138402 2249882 command_runner.go:130] > [Unit]
	I1002 10:58:58.138422 2249882 command_runner.go:130] > Description=Docker Application Container Engine
	I1002 10:58:58.138430 2249882 command_runner.go:130] > Documentation=https://docs.docker.com
	I1002 10:58:58.138436 2249882 command_runner.go:130] > BindsTo=containerd.service
	I1002 10:58:58.138443 2249882 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1002 10:58:58.138449 2249882 command_runner.go:130] > Wants=network-online.target
	I1002 10:58:58.138459 2249882 command_runner.go:130] > Requires=docker.socket
	I1002 10:58:58.138465 2249882 command_runner.go:130] > StartLimitBurst=3
	I1002 10:58:58.138472 2249882 command_runner.go:130] > StartLimitIntervalSec=60
	I1002 10:58:58.138477 2249882 command_runner.go:130] > [Service]
	I1002 10:58:58.138482 2249882 command_runner.go:130] > Type=notify
	I1002 10:58:58.138493 2249882 command_runner.go:130] > Restart=on-failure
	I1002 10:58:58.138499 2249882 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1002 10:58:58.138506 2249882 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2,192.168.58.3
	I1002 10:58:58.138521 2249882 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1002 10:58:58.138530 2249882 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1002 10:58:58.138548 2249882 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1002 10:58:58.138564 2249882 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1002 10:58:58.138573 2249882 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1002 10:58:58.138584 2249882 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1002 10:58:58.138596 2249882 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1002 10:58:58.138608 2249882 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1002 10:58:58.138616 2249882 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1002 10:58:58.138621 2249882 command_runner.go:130] > ExecStart=
	I1002 10:58:58.138639 2249882 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1002 10:58:58.138652 2249882 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1002 10:58:58.138662 2249882 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1002 10:58:58.138674 2249882 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1002 10:58:58.138684 2249882 command_runner.go:130] > LimitNOFILE=infinity
	I1002 10:58:58.138693 2249882 command_runner.go:130] > LimitNPROC=infinity
	I1002 10:58:58.138698 2249882 command_runner.go:130] > LimitCORE=infinity
	I1002 10:58:58.138705 2249882 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1002 10:58:58.138712 2249882 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1002 10:58:58.138720 2249882 command_runner.go:130] > TasksMax=infinity
	I1002 10:58:58.138725 2249882 command_runner.go:130] > TimeoutStartSec=0
	I1002 10:58:58.138737 2249882 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1002 10:58:58.138748 2249882 command_runner.go:130] > Delegate=yes
	I1002 10:58:58.138759 2249882 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1002 10:58:58.138768 2249882 command_runner.go:130] > KillMode=process
	I1002 10:58:58.138772 2249882 command_runner.go:130] > [Install]
	I1002 10:58:58.138779 2249882 command_runner.go:130] > WantedBy=multi-user.target
	I1002 10:58:58.141317 2249882 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 10:58:58.141385 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 10:58:58.159230 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:58:58.194592 2249882 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1002 10:58:58.196408 2249882 ssh_runner.go:195] Run: which cri-dockerd
	I1002 10:58:58.200640 2249882 command_runner.go:130] > /usr/bin/cri-dockerd
	I1002 10:58:58.201708 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 10:58:58.216309 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 10:58:58.240331 2249882 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 10:58:58.387285 2249882 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 10:58:58.501982 2249882 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 10:58:58.502075 2249882 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 10:58:58.528652 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:58.632108 2249882 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:58:58.954354 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:58:59.066172 2249882 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 10:58:59.170119 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:58:59.274821 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:59.384726 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 10:58:59.410124 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:59.536868 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 10:58:59.639691 2249882 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 10:58:59.639810 2249882 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 10:58:59.644497 2249882 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1002 10:58:59.644522 2249882 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 10:58:59.644531 2249882 command_runner.go:130] > Device: 10003bh/1048635d	Inode: 279         Links: 1
	I1002 10:58:59.644554 2249882 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1002 10:58:59.644564 2249882 command_runner.go:130] > Access: 2023-10-02 10:58:59.553794138 +0000
	I1002 10:58:59.644590 2249882 command_runner.go:130] > Modify: 2023-10-02 10:58:59.549794160 +0000
	I1002 10:58:59.644604 2249882 command_runner.go:130] > Change: 2023-10-02 10:58:59.549794160 +0000
	I1002 10:58:59.644610 2249882 command_runner.go:130] >  Birth: -
	I1002 10:58:59.644895 2249882 start.go:537] Will wait 60s for crictl version
	I1002 10:58:59.644980 2249882 ssh_runner.go:195] Run: which crictl
	I1002 10:58:59.649583 2249882 command_runner.go:130] > /usr/bin/crictl
	I1002 10:58:59.650983 2249882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:58:59.709810 2249882 command_runner.go:130] > Version:  0.1.0
	I1002 10:58:59.709833 2249882 command_runner.go:130] > RuntimeName:  docker
	I1002 10:58:59.709840 2249882 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1002 10:58:59.709846 2249882 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 10:58:59.712494 2249882 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 10:58:59.712586 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:58:59.740158 2249882 command_runner.go:130] > 24.0.6
	I1002 10:58:59.741953 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:58:59.769637 2249882 command_runner.go:130] > 24.0.6
	I1002 10:58:59.775658 2249882 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 10:58:59.777376 2249882 out.go:177]   - env NO_PROXY=192.168.58.2
	I1002 10:58:59.779344 2249882 out.go:177]   - env NO_PROXY=192.168.58.2,192.168.58.3
	I1002 10:58:59.781180 2249882 cli_runner.go:164] Run: docker network inspect multinode-899833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:58:59.799195 2249882 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 10:58:59.803631 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:58:59.816424 2249882 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833 for IP: 192.168.58.4
	I1002 10:58:59.816458 2249882 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1d43a94e604cdd7d897bd7b1078cd14b38f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:58:59.816617 2249882 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key
	I1002 10:58:59.816663 2249882 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key
	I1002 10:58:59.816677 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 10:58:59.816693 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 10:58:59.816709 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 10:58:59.816720 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 10:58:59.816780 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem (1338 bytes)
	W1002 10:58:59.816813 2249882 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700_empty.pem, impossibly tiny 0 bytes
	I1002 10:58:59.816825 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 10:58:59.816850 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:58:59.816878 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:58:59.816904 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem (1679 bytes)
	I1002 10:58:59.816954 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:58:59.816987 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem -> /usr/share/ca-certificates/2139700.pem
	I1002 10:58:59.817003 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /usr/share/ca-certificates/21397002.pem
	I1002 10:58:59.817014 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:59.817382 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:58:59.847492 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 10:58:59.877398 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:58:59.906495 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 10:58:59.937639 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem --> /usr/share/ca-certificates/2139700.pem (1338 bytes)
	I1002 10:58:59.966131 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /usr/share/ca-certificates/21397002.pem (1708 bytes)
	I1002 10:58:59.995972 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:59:00.046315 2249882 ssh_runner.go:195] Run: openssl version
	I1002 10:59:00.056919 2249882 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 10:59:00.057417 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2139700.pem && ln -fs /usr/share/ca-certificates/2139700.pem /etc/ssl/certs/2139700.pem"
	I1002 10:59:00.076454 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2139700.pem
	I1002 10:59:00.093299 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:59:00.093344 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:59:00.094118 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2139700.pem
	I1002 10:59:00.109135 2249882 command_runner.go:130] > 51391683
	I1002 10:59:00.110391 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2139700.pem /etc/ssl/certs/51391683.0"
	I1002 10:59:00.127665 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21397002.pem && ln -fs /usr/share/ca-certificates/21397002.pem /etc/ssl/certs/21397002.pem"
	I1002 10:59:00.146695 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21397002.pem
	I1002 10:59:00.152933 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:59:00.153345 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:59:00.153418 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21397002.pem
	I1002 10:59:00.163484 2249882 command_runner.go:130] > 3ec20f2e
	I1002 10:59:00.164022 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21397002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 10:59:00.179346 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:59:00.193887 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:59:00.199319 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:59:00.199545 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:59:00.199613 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:59:00.209021 2249882 command_runner.go:130] > b5213941
	I1002 10:59:00.209803 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
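The `openssl x509 -hash` / `ln -fs` sequence above is the standard OpenSSL subject-hash scheme: each CA PEM is hashed and linked into `/etc/ssl/certs` as `<hash>.0`, which is how libssl locates trusted CAs. A minimal self-contained sketch of the same pattern, using `/tmp` and a throwaway self-signed cert instead of the paths from this run:

```shell
# Generate a throwaway self-signed cert (stand-in for the minikube CAs above).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 1 2>/dev/null
# Compute the OpenSSL subject-name hash (same step as "openssl x509 -hash -noout" in the log).
HASH=$(openssl x509 -hash -noout -in /tmp/demo.pem)
# Link the cert under "<hash>.0"; libssl resolves CAs through these links.
ln -fs /tmp/demo.pem "/tmp/${HASH}.0"
readlink "/tmp/${HASH}.0"
```

The real run does the same thing with `test -L ... || ln -fs ...` under sudo so an existing correct link is left untouched.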
	I1002 10:59:00.222149 2249882 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:59:00.227129 2249882 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:59:00.227430 2249882 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:59:00.227533 2249882 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 10:59:00.314770 2249882 command_runner.go:130] > cgroupfs
	I1002 10:59:00.316635 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:59:00.316655 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:59:00.316664 2249882 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:59:00.316683 2249882 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899833 NodeName:multinode-899833-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 10:59:00.316802 2249882 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899833-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:59:00.316857 2249882 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-899833-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:59:00.316925 2249882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 10:59:00.327072 2249882 command_runner.go:130] > kubeadm
	I1002 10:59:00.327144 2249882 command_runner.go:130] > kubectl
	I1002 10:59:00.327164 2249882 command_runner.go:130] > kubelet
	I1002 10:59:00.328304 2249882 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:59:00.328375 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 10:59:00.340479 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1002 10:59:00.363317 2249882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 10:59:00.385208 2249882 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 10:59:00.390069 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
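The `/etc/hosts` command above is a grep-and-append upsert: filter out any stale line for `control-plane.minikube.internal`, append the current mapping, and copy the temp file back over the original. A sketch against a scratch file (paths here are illustrative; the logged command targets `/etc/hosts` under sudo):

```shell
HOSTS=/tmp/hosts.demo
# Seed a hosts file that already contains a stale control-plane mapping.
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop the old entry, append the fresh one, then copy the temp file back.
{ grep -v 'control-plane.minikube.internal$' "$HOSTS"; \
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"
cat "$HOSTS"
```

Writing to `/tmp/h.$$` first means the hosts file is replaced in one `cp` rather than edited while being read.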
	I1002 10:59:00.404029 2249882 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:59:00.404322 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:59:00.404375 2249882 start.go:304] JoinCluster: &{Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:59:00.404529 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 10:59:00.404625 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:59:00.423690 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:59:00.625775 2249882 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d 
	I1002 10:59:00.625833 2249882 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:00.625876 2249882 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:59:00.626270 2249882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-899833-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1002 10:59:00.626336 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:59:00.649061 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:59:00.819876 2249882 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1002 10:59:00.888002 2249882 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-jbhdj, kube-system/kube-proxy-xnhqd
	I1002 10:59:03.912302 2249882 command_runner.go:130] > node/multinode-899833-m03 cordoned
	I1002 10:59:03.912328 2249882 command_runner.go:130] > pod "busybox-5bc68d56bd-zwsch" has DeletionTimestamp older than 1 seconds, skipping
	I1002 10:59:03.912336 2249882 command_runner.go:130] > node/multinode-899833-m03 drained
	I1002 10:59:03.912358 2249882 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-899833-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.286062073s)
	I1002 10:59:03.912374 2249882 node.go:108] successfully drained node "m03"
	I1002 10:59:03.912725 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:59:03.912988 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:59:03.913360 2249882 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1002 10:59:03.913416 2249882 round_trippers.go:463] DELETE https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:59:03.913426 2249882 round_trippers.go:469] Request Headers:
	I1002 10:59:03.913436 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:59:03.913443 2249882 round_trippers.go:473]     Content-Type: application/json
	I1002 10:59:03.913452 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:59:03.917697 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:59:03.917719 2249882 round_trippers.go:577] Response Headers:
	I1002 10:59:03.917727 2249882 round_trippers.go:580]     Content-Length: 171
	I1002 10:59:03.917733 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:59:03 GMT
	I1002 10:59:03.917740 2249882 round_trippers.go:580]     Audit-Id: 20caf58d-57b7-4fb5-a6db-64bbd3a7be34
	I1002 10:59:03.917746 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:59:03.917753 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:59:03.917766 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:59:03.917773 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:59:03.917971 2249882 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-899833-m03","kind":"nodes","uid":"332112e7-39bc-44d1-86bd-88e1074e5d8d"}}
	I1002 10:59:03.918048 2249882 node.go:124] successfully deleted node "m03"
	I1002 10:59:03.918074 2249882 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:03.918130 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:03.918168 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 10:59:03.968197 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:59:04.024868 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:59:04.024891 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:59:04.024898 2249882 command_runner.go:130] > OS: Linux
	I1002 10:59:04.024904 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:59:04.024911 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:59:04.024918 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:59:04.024925 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:59:04.024937 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:59:04.024943 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:59:04.024953 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:59:04.024962 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:59:04.024969 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:59:04.183562 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:59:04.183589 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 10:59:04.209203 2249882 command_runner.go:130] ! W1002 10:59:03.967644    1719 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:59:04.209229 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:59:04.209247 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:59:04.209282 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:59:04.209292 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:59:04.209309 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 10:59:04.209319 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 10:59:04.209375 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:03.967644    1719 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:04.209394 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 10:59:04.209407 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 10:59:04.262481 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 10:59:04.262556 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:04.262611 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:04.262660 2249882 retry.go:31] will retry after 11.616103796s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:03.967644    1719 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:15.879205 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:15.879290 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 10:59:15.922558 2249882 command_runner.go:130] ! W1002 10:59:15.922068    2168 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:59:15.922657 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:59:15.981705 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:59:16.074566 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:59:16.074592 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:59:16.128542 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 10:59:16.128569 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:16.131598 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:59:16.131621 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:59:16.131629 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:59:16.131635 2249882 command_runner.go:130] > OS: Linux
	I1002 10:59:16.131642 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:59:16.131649 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:59:16.131656 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:59:16.131662 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:59:16.131669 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:59:16.131675 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:59:16.131682 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:59:16.131689 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:59:16.131695 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:59:16.131704 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:59:16.131713 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1002 10:59:16.131762 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:15.922068    2168 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:16.131779 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 10:59:16.131793 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 10:59:16.204334 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 10:59:16.204356 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:16.204411 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:16.204427 2249882 retry.go:31] will retry after 20.034972791s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:15.922068    2168 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.239580 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:36.239639 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 10:59:36.297751 2249882 command_runner.go:130] ! W1002 10:59:36.297387    2354 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:59:36.298313 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:59:36.353189 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:59:36.437536 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:59:36.437559 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:59:36.486158 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 10:59:36.486181 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.492399 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:59:36.492424 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:59:36.492432 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:59:36.492438 2249882 command_runner.go:130] > OS: Linux
	I1002 10:59:36.492449 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:59:36.492463 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:59:36.492472 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:59:36.492478 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:59:36.492495 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:59:36.492502 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:59:36.492523 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:59:36.492530 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:59:36.492541 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:59:36.492548 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:59:36.492557 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1002 10:59:36.492607 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:36.297387    2354 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.492623 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 10:59:36.492637 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 10:59:36.552712 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 10:59:36.552738 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.552760 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.552781 2249882 retry.go:31] will retry after 14.747204609s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:36.297387    2354 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:51.303178 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:51.303233 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 10:59:51.346714 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:59:51.417788 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:59:51.417815 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:59:51.417822 2249882 command_runner.go:130] > OS: Linux
	I1002 10:59:51.417828 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:59:51.417836 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:59:51.417842 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:59:51.417849 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:59:51.417855 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:59:51.417863 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:59:51.417872 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:59:51.417878 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:59:51.417884 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:59:51.539310 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:59:51.539334 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 10:59:51.563625 2249882 command_runner.go:130] ! W1002 10:59:51.346170    2489 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:59:51.563649 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:59:51.563666 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:59:51.563673 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:59:51.563682 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:59:51.563699 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 10:59:51.563712 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 10:59:51.563763 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:51.346170    2489 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:51.563779 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 10:59:51.563792 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 10:59:51.608541 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 10:59:51.608567 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:51.608595 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:51.608615 2249882 retry.go:31] will retry after 29.16686618s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:51.346170    2489 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:20.778818 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:00:20.778874 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 11:00:20.826840 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:00:20.886333 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 11:00:20.886358 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 11:00:20.886365 2249882 command_runner.go:130] > OS: Linux
	I1002 11:00:20.886372 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 11:00:20.886383 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 11:00:20.886390 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 11:00:20.886401 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 11:00:20.886408 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 11:00:20.886414 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 11:00:20.886423 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 11:00:20.886437 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 11:00:20.886444 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 11:00:20.997494 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:00:20.997516 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:00:21.025214 2249882 command_runner.go:130] ! W1002 11:00:20.826254    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 11:00:21.025282 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 11:00:21.025300 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 11:00:21.025314 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 11:00:21.025324 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 11:00:21.025342 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 11:00:21.025357 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 11:00:21.025414 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:00:20.826254    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:21.025430 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 11:00:21.025444 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 11:00:21.074836 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 11:00:21.074862 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:21.074886 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:21.074902 2249882 retry.go:31] will retry after 33.544601599s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:00:20.826254    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:54.621357 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:00:54.621429 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 11:00:54.672956 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:00:54.735166 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 11:00:54.735200 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 11:00:54.735241 2249882 command_runner.go:130] > OS: Linux
	I1002 11:00:54.735249 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 11:00:54.735256 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 11:00:54.735263 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 11:00:54.735270 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 11:00:54.735276 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 11:00:54.735282 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 11:00:54.735289 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 11:00:54.735295 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 11:00:54.735302 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 11:00:54.857318 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:00:54.857342 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:00:54.885967 2249882 command_runner.go:130] ! W1002 11:00:54.672262    3023 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 11:00:54.885991 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 11:00:54.886008 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 11:00:54.886018 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 11:00:54.886027 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 11:00:54.886042 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 11:00:54.886054 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 11:00:54.886097 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:00:54.672262    3023 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:54.886110 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 11:00:54.886122 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 11:00:54.937392 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 11:00:54.937417 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:54.937440 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:54.937457 2249882 retry.go:31] will retry after 35.215075844s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:00:54.672262    3023 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:01:30.153729 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:01:30.153833 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 11:01:30.203411 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:01:30.260517 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 11:01:30.260545 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 11:01:30.260553 2249882 command_runner.go:130] > OS: Linux
	I1002 11:01:30.260561 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 11:01:30.260570 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 11:01:30.260577 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 11:01:30.260583 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 11:01:30.260592 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 11:01:30.260608 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 11:01:30.260620 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 11:01:30.260629 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 11:01:30.260637 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 11:01:30.376995 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:01:30.377035 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:01:30.405361 2249882 command_runner.go:130] ! W1002 11:01:30.202946    3312 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 11:01:30.405392 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 11:01:30.405411 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 11:01:30.405421 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 11:01:30.405431 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 11:01:30.405450 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 11:01:30.405462 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 11:01:30.405513 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:01:30.202946    3312 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:01:30.405526 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 11:01:30.405540 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 11:01:30.447767 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 11:01:30.447795 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 11:01:30.451281 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:01:30.451330 2249882 start.go:306] JoinCluster complete in 2m30.046955297s
	I1002 11:01:30.454789 2249882 out.go:177] 
	W1002 11:01:30.456792 2249882 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:01:30.202946    3312 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:01:30.202946    3312 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 11:01:30.456855 2249882 out.go:239] * 
	* 
	W1002 11:01:30.457800 2249882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 11:01:30.459957 2249882 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-linux-arm64 node list -p multinode-899833" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-899833
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-899833
helpers_test.go:235: (dbg) docker inspect multinode-899833:

-- stdout --
	[
	    {
	        "Id": "1e76ac47762cab5b5da0c5271ec5cab4d917a0f9ea9ea2e9d271ee6fac780cb0",
	        "Created": "2023-10-02T10:54:15.745937313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2250074,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T10:57:27.410038017Z",
	            "FinishedAt": "2023-10-02T10:57:14.64320985Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/1e76ac47762cab5b5da0c5271ec5cab4d917a0f9ea9ea2e9d271ee6fac780cb0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e76ac47762cab5b5da0c5271ec5cab4d917a0f9ea9ea2e9d271ee6fac780cb0/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e76ac47762cab5b5da0c5271ec5cab4d917a0f9ea9ea2e9d271ee6fac780cb0/hosts",
	        "LogPath": "/var/lib/docker/containers/1e76ac47762cab5b5da0c5271ec5cab4d917a0f9ea9ea2e9d271ee6fac780cb0/1e76ac47762cab5b5da0c5271ec5cab4d917a0f9ea9ea2e9d271ee6fac780cb0-json.log",
	        "Name": "/multinode-899833",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-899833:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-899833",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5b606e1fd020613048995b9d4c0adddf10612802423f5df6bd8b4fbea51c70d5-init/diff:/var/lib/docker/overlay2/1d88af17a205d2819b1e281e265595a32e0f15f4f368d2227a6ad399b77d9a22/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b606e1fd020613048995b9d4c0adddf10612802423f5df6bd8b4fbea51c70d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b606e1fd020613048995b9d4c0adddf10612802423f5df6bd8b4fbea51c70d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b606e1fd020613048995b9d4c0adddf10612802423f5df6bd8b4fbea51c70d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-899833",
	                "Source": "/var/lib/docker/volumes/multinode-899833/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-899833",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-899833",
	                "name.minikube.sigs.k8s.io": "multinode-899833",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32ea15f42f94ddee9477f69a123221579c5feb4fe0c00d28447e8d3f7813a83f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35590"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35589"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35586"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35588"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35587"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/32ea15f42f94",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-899833": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1e76ac47762c",
	                        "multinode-899833"
	                    ],
	                    "NetworkID": "dbbef4c58f7466fd7c0e268e5449d90b97dac9c61a1d73436d787dd3757d4765",
	                    "EndpointID": "2eb2a7c324e640924aa35947cf25e1d9ec6ef08547338f05e612b8c2a0c9c742",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-899833 -n multinode-899833
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-899833 logs -n 25: (2.142335219s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-899833 ssh -n                                                                 | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-899833 cp multinode-899833-m02:/home/docker/cp-test.txt                       | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2344565154/001/cp-test_multinode-899833-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n                                                                 | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-899833 cp multinode-899833-m02:/home/docker/cp-test.txt                       | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833:/home/docker/cp-test_multinode-899833-m02_multinode-899833.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n                                                                 | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n multinode-899833 sudo cat                                       | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | /home/docker/cp-test_multinode-899833-m02_multinode-899833.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-899833 cp multinode-899833-m02:/home/docker/cp-test.txt                       | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m03:/home/docker/cp-test_multinode-899833-m02_multinode-899833-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n                                                                 | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n multinode-899833-m03 sudo cat                                   | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | /home/docker/cp-test_multinode-899833-m02_multinode-899833-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-899833 cp testdata/cp-test.txt                                                | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n                                                                 | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-899833 cp multinode-899833-m03:/home/docker/cp-test.txt                       | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2344565154/001/cp-test_multinode-899833-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n                                                                 | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-899833 cp multinode-899833-m03:/home/docker/cp-test.txt                       | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833:/home/docker/cp-test_multinode-899833-m03_multinode-899833.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n                                                                 | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n multinode-899833 sudo cat                                       | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | /home/docker/cp-test_multinode-899833-m03_multinode-899833.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-899833 cp multinode-899833-m03:/home/docker/cp-test.txt                       | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m02:/home/docker/cp-test_multinode-899833-m03_multinode-899833-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n                                                                 | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | multinode-899833-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-899833 ssh -n multinode-899833-m02 sudo cat                                   | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	|         | /home/docker/cp-test_multinode-899833-m03_multinode-899833-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-899833 node stop m03                                                          | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:56 UTC |
	| node    | multinode-899833 node start                                                             | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:56 UTC | 02 Oct 23 10:57 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-899833                                                                | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:57 UTC |                     |
	| stop    | -p multinode-899833                                                                     | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:57 UTC | 02 Oct 23 10:57 UTC |
	| start   | -p multinode-899833                                                                     | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 10:57 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-899833                                                                | multinode-899833 | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:57:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:57:26.768477 2249882 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:57:26.768622 2249882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:57:26.768632 2249882 out.go:309] Setting ErrFile to fd 2...
	I1002 10:57:26.768638 2249882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:57:26.768905 2249882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	I1002 10:57:26.769311 2249882 out.go:303] Setting JSON to false
	I1002 10:57:26.770346 2249882 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":67194,"bootTime":1696177053,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 10:57:26.770426 2249882 start.go:138] virtualization:  
	I1002 10:57:26.773077 2249882 out.go:177] * [multinode-899833] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 10:57:26.775244 2249882 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:57:26.776994 2249882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:57:26.775488 2249882 notify.go:220] Checking for updates...
	I1002 10:57:26.781246 2249882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:57:26.783234 2249882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	I1002 10:57:26.784926 2249882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 10:57:26.786898 2249882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:57:26.789072 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:57:26.789231 2249882 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:57:26.813322 2249882 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 10:57:26.813437 2249882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:57:26.895464 2249882 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 10:57:26.885241881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:57:26.895576 2249882 docker.go:294] overlay module found
	I1002 10:57:26.897834 2249882 out.go:177] * Using the docker driver based on existing profile
	I1002 10:57:26.899393 2249882 start.go:298] selected driver: docker
	I1002 10:57:26.899410 2249882 start.go:902] validating driver "docker" against &{Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:57:26.899557 2249882 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:57:26.899665 2249882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:57:26.971955 2249882 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 10:57:26.954531215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:57:26.972353 2249882 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 10:57:26.972382 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:57:26.972390 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:57:26.972402 2249882 start_flags.go:321] config:
	{Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:57:26.975780 2249882 out.go:177] * Starting control plane node multinode-899833 in cluster multinode-899833
	I1002 10:57:26.977703 2249882 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:57:26.979514 2249882 out.go:177] * Pulling base image ...
	I1002 10:57:26.981623 2249882 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:57:26.981681 2249882 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 10:57:26.981698 2249882 cache.go:57] Caching tarball of preloaded images
	I1002 10:57:26.981797 2249882 preload.go:174] Found /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 10:57:26.981813 2249882 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 10:57:26.981954 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:57:26.982169 2249882 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:57:27.014795 2249882 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 10:57:27.014825 2249882 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 10:57:27.014846 2249882 cache.go:195] Successfully downloaded all kic artifacts
	I1002 10:57:27.014919 2249882 start.go:365] acquiring machines lock for multinode-899833: {Name:mk4b54e7aae7d30b0899f0f511ab22ae73c52c8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:57:27.014997 2249882 start.go:369] acquired machines lock for "multinode-899833" in 45.178µs
	I1002 10:57:27.015023 2249882 start.go:96] Skipping create...Using existing machine configuration
	I1002 10:57:27.015032 2249882 fix.go:54] fixHost starting: 
	I1002 10:57:27.015306 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833 --format={{.State.Status}}
	I1002 10:57:27.036286 2249882 fix.go:102] recreateIfNeeded on multinode-899833: state=Stopped err=<nil>
	W1002 10:57:27.036327 2249882 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 10:57:27.038625 2249882 out.go:177] * Restarting existing docker container for "multinode-899833" ...
	I1002 10:57:27.040396 2249882 cli_runner.go:164] Run: docker start multinode-899833
	I1002 10:57:27.418446 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833 --format={{.State.Status}}
	I1002 10:57:27.443884 2249882 kic.go:426] container "multinode-899833" state is running.
	I1002 10:57:27.444258 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833
	I1002 10:57:27.468901 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:57:27.469140 2249882 machine.go:88] provisioning docker machine ...
	I1002 10:57:27.469160 2249882 ubuntu.go:169] provisioning hostname "multinode-899833"
	I1002 10:57:27.469212 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:27.491549 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:27.491982 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:27.492002 2249882 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899833 && echo "multinode-899833" | sudo tee /etc/hostname
	I1002 10:57:27.492707 2249882 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 10:57:30.647760 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899833
	
	I1002 10:57:30.647845 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:30.666440 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:30.666852 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:30.666880 2249882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899833/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:57:30.806460 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:57:30.806488 2249882 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2134307/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2134307/.minikube}
	I1002 10:57:30.806523 2249882 ubuntu.go:177] setting up certificates
	I1002 10:57:30.806533 2249882 provision.go:83] configureAuth start
	I1002 10:57:30.806603 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833
	I1002 10:57:30.827352 2249882 provision.go:138] copyHostCerts
	I1002 10:57:30.827394 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:57:30.827425 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem, removing ...
	I1002 10:57:30.827436 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:57:30.827516 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem (1082 bytes)
	I1002 10:57:30.827649 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:57:30.827673 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem, removing ...
	I1002 10:57:30.827682 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:57:30.827715 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem (1123 bytes)
	I1002 10:57:30.827763 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:57:30.827785 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem, removing ...
	I1002 10:57:30.827792 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:57:30.827818 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem (1679 bytes)
	I1002 10:57:30.827869 2249882 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem org=jenkins.multinode-899833 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-899833]
	I1002 10:57:31.107517 2249882 provision.go:172] copyRemoteCerts
	I1002 10:57:31.107590 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:57:31.107634 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.131593 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:31.231676 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 10:57:31.231734 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:57:31.260950 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 10:57:31.261030 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 10:57:31.289286 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 10:57:31.289345 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 10:57:31.317222 2249882 provision.go:86] duration metric: configureAuth took 510.650177ms
	I1002 10:57:31.317248 2249882 ubuntu.go:193] setting minikube options for container-runtime
	I1002 10:57:31.317510 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:57:31.317574 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.334897 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:31.335308 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:31.335325 2249882 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 10:57:31.471272 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 10:57:31.471294 2249882 ubuntu.go:71] root file system type: overlay
	I1002 10:57:31.471411 2249882 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 10:57:31.471486 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.492425 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:31.492855 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:31.492939 2249882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 10:57:31.644068 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 10:57:31.644167 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.663017 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:57:31.663447 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35590 <nil> <nil>}
	I1002 10:57:31.663471 2249882 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
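The `diff ... || { mv ...; restart; }` command above is a compare-then-swap pattern: the service is only replaced and restarted when the newly rendered unit actually differs from the installed one. A minimal sketch of the same pattern, using temp files in place of `/lib/systemd/system` and an `echo` in place of the systemd restart (both assumptions for illustration):

```shell
# Sketch of the diff-or-replace update pattern from the log: swap the .new
# file in (and "restart") only when its content differs from the current one.
cur=$(mktemp) ; new=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd' > "$cur"
echo 'ExecStart=/usr/bin/dockerd --tlsverify' > "$new"
# diff exits non-zero when the files differ, triggering the replacement.
diff -u "$cur" "$new" >/dev/null || { mv "$new" "$cur"; echo restarted; }
cat "$cur"
```

If the rendered unit is identical, `diff` exits 0, the `||` branch never runs, and the daemon is left untouched: repeated provisioning does not cause spurious docker restarts.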
	I1002 10:57:31.809123 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:57:31.809143 2249882 machine.go:91] provisioned docker machine in 4.33998987s
	I1002 10:57:31.809154 2249882 start.go:300] post-start starting for "multinode-899833" (driver="docker")
	I1002 10:57:31.809164 2249882 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:57:31.809235 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:57:31.809305 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.829590 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:31.928639 2249882 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:57:31.932917 2249882 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 10:57:31.932978 2249882 command_runner.go:130] > NAME="Ubuntu"
	I1002 10:57:31.932991 2249882 command_runner.go:130] > VERSION_ID="22.04"
	I1002 10:57:31.932999 2249882 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 10:57:31.933005 2249882 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 10:57:31.933009 2249882 command_runner.go:130] > ID=ubuntu
	I1002 10:57:31.933015 2249882 command_runner.go:130] > ID_LIKE=debian
	I1002 10:57:31.933021 2249882 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 10:57:31.933031 2249882 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 10:57:31.933048 2249882 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 10:57:31.933061 2249882 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 10:57:31.933067 2249882 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 10:57:31.933126 2249882 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 10:57:31.933155 2249882 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 10:57:31.933169 2249882 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 10:57:31.933181 2249882 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 10:57:31.933191 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/addons for local assets ...
	I1002 10:57:31.933274 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/files for local assets ...
	I1002 10:57:31.933361 2249882 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> 21397002.pem in /etc/ssl/certs
	I1002 10:57:31.933374 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /etc/ssl/certs/21397002.pem
	I1002 10:57:31.933474 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 10:57:31.944522 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:57:31.973961 2249882 start.go:303] post-start completed in 164.777009ms
	I1002 10:57:31.974050 2249882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:57:31.974092 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:31.993542 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:32.087357 2249882 command_runner.go:130] > 12%!(MISSING)
	I1002 10:57:32.087439 2249882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"

	I1002 10:57:32.093215 2249882 command_runner.go:130] > 173G
	I1002 10:57:32.093274 2249882 fix.go:56] fixHost completed within 5.078239781s
	I1002 10:57:32.093286 2249882 start.go:83] releasing machines lock for "multinode-899833", held for 5.078277091s
	I1002 10:57:32.093382 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833
	I1002 10:57:32.110539 2249882 ssh_runner.go:195] Run: cat /version.json
	I1002 10:57:32.110596 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:32.110647 2249882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:57:32.110715 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:57:32.134165 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:32.142856 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:57:32.358018 2249882 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 10:57:32.358112 2249882 command_runner.go:130] > {"iso_version": "v1.31.0-1694625400-17243", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "c590c2ca0a7db48c4b84c041c2699711a39ab56a"}
	I1002 10:57:32.358269 2249882 ssh_runner.go:195] Run: systemctl --version
	I1002 10:57:32.363476 2249882 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1002 10:57:32.363508 2249882 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 10:57:32.363871 2249882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 10:57:32.368809 2249882 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 10:57:32.368832 2249882 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I1002 10:57:32.368840 2249882 command_runner.go:130] > Device: 36h/54d	Inode: 1835920     Links: 1
	I1002 10:57:32.368848 2249882 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:57:32.368891 2249882 command_runner.go:130] > Access: 2023-10-02 10:54:22.955277017 +0000
	I1002 10:57:32.368906 2249882 command_runner.go:130] > Modify: 2023-10-02 10:54:22.923277186 +0000
	I1002 10:57:32.368914 2249882 command_runner.go:130] > Change: 2023-10-02 10:54:22.923277186 +0000
	I1002 10:57:32.368925 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:54:22.923277186 +0000
	I1002 10:57:32.369288 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
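The loopback CNI patch in that `find -exec` can be tried in isolation. This is a hedged reproduction on a sample config file, not the exact minikube invocation: the target is a temp file rather than `/etc/cni/net.d` (so no `sudo`), and the starting `cniVersion` is an assumption.

```shell
# Standalone run of the loopback CNI patch from the log on a sample file:
# inject a "name" field if absent, then pin cniVersion to 1.0.0.
conf=$(mktemp)
printf '{\n  "cniVersion": "0.3.1",\n  "type": "loopback"\n}\n' > "$conf"
grep -q loopback "$conf" && \
  ( grep -q '"name"' "$conf" || \
    sed -i '/"type": "loopback"/i \ \ "name": "loopback",' "$conf" ) && \
  sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|' "$conf"
cat "$conf"
```

The version bump matters because older configs declare CNI spec versions that newer loopback plugins reject; patching to `1.0.0` keeps the preinstalled config loadable.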
	I1002 10:57:32.390697 2249882 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 10:57:32.390787 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:57:32.401404 2249882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 10:57:32.401431 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:57:32.401466 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:57:32.401569 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:57:32.419560 2249882 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1002 10:57:32.421071 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 10:57:32.432652 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 10:57:32.444254 2249882 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 10:57:32.444327 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 10:57:32.455927 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:57:32.467556 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 10:57:32.478762 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:57:32.490227 2249882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:57:32.500867 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 10:57:32.512455 2249882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:57:32.521344 2249882 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 10:57:32.522422 2249882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:57:32.532497 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:57:32.646421 2249882 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 10:57:32.761354 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:57:32.761402 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:57:32.761461 2249882 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 10:57:32.782419 2249882 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1002 10:57:32.782856 2249882 command_runner.go:130] > [Unit]
	I1002 10:57:32.782881 2249882 command_runner.go:130] > Description=Docker Application Container Engine
	I1002 10:57:32.782888 2249882 command_runner.go:130] > Documentation=https://docs.docker.com
	I1002 10:57:32.782894 2249882 command_runner.go:130] > BindsTo=containerd.service
	I1002 10:57:32.782902 2249882 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1002 10:57:32.782907 2249882 command_runner.go:130] > Wants=network-online.target
	I1002 10:57:32.782919 2249882 command_runner.go:130] > Requires=docker.socket
	I1002 10:57:32.782924 2249882 command_runner.go:130] > StartLimitBurst=3
	I1002 10:57:32.782932 2249882 command_runner.go:130] > StartLimitIntervalSec=60
	I1002 10:57:32.782938 2249882 command_runner.go:130] > [Service]
	I1002 10:57:32.782946 2249882 command_runner.go:130] > Type=notify
	I1002 10:57:32.782955 2249882 command_runner.go:130] > Restart=on-failure
	I1002 10:57:32.782965 2249882 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1002 10:57:32.782983 2249882 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1002 10:57:32.782995 2249882 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1002 10:57:32.783004 2249882 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1002 10:57:32.783014 2249882 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1002 10:57:32.783022 2249882 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1002 10:57:32.783031 2249882 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1002 10:57:32.783051 2249882 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1002 10:57:32.783064 2249882 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1002 10:57:32.783069 2249882 command_runner.go:130] > ExecStart=
	I1002 10:57:32.783089 2249882 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1002 10:57:32.783098 2249882 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1002 10:57:32.783107 2249882 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1002 10:57:32.783115 2249882 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1002 10:57:32.783122 2249882 command_runner.go:130] > LimitNOFILE=infinity
	I1002 10:57:32.783127 2249882 command_runner.go:130] > LimitNPROC=infinity
	I1002 10:57:32.783140 2249882 command_runner.go:130] > LimitCORE=infinity
	I1002 10:57:32.783147 2249882 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1002 10:57:32.783158 2249882 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1002 10:57:32.783164 2249882 command_runner.go:130] > TasksMax=infinity
	I1002 10:57:32.783169 2249882 command_runner.go:130] > TimeoutStartSec=0
	I1002 10:57:32.783177 2249882 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1002 10:57:32.783185 2249882 command_runner.go:130] > Delegate=yes
	I1002 10:57:32.783192 2249882 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1002 10:57:32.783200 2249882 command_runner.go:130] > KillMode=process
	I1002 10:57:32.783214 2249882 command_runner.go:130] > [Install]
	I1002 10:57:32.783220 2249882 command_runner.go:130] > WantedBy=multi-user.target
	I1002 10:57:32.785023 2249882 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 10:57:32.785094 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 10:57:32.799859 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:57:32.820739 2249882 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1002 10:57:32.822761 2249882 ssh_runner.go:195] Run: which cri-dockerd
	I1002 10:57:32.826987 2249882 command_runner.go:130] > /usr/bin/cri-dockerd
	I1002 10:57:32.827615 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 10:57:32.838811 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 10:57:32.866902 2249882 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 10:57:32.989771 2249882 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 10:57:33.099603 2249882 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 10:57:33.099762 2249882 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 10:57:33.125952 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:57:33.239579 2249882 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:57:33.664818 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:57:33.769951 2249882 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 10:57:33.870608 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:57:33.970340 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:57:34.075278 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 10:57:34.093128 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:57:34.199243 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 10:57:34.295953 2249882 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 10:57:34.296023 2249882 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 10:57:34.300611 2249882 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1002 10:57:34.300635 2249882 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 10:57:34.300645 2249882 command_runner.go:130] > Device: 43h/67d	Inode: 231         Links: 1
	I1002 10:57:34.300654 2249882 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1002 10:57:34.300661 2249882 command_runner.go:130] > Access: 2023-10-02 10:57:34.206254272 +0000
	I1002 10:57:34.300667 2249882 command_runner.go:130] > Modify: 2023-10-02 10:57:34.206254272 +0000
	I1002 10:57:34.300673 2249882 command_runner.go:130] > Change: 2023-10-02 10:57:34.210254250 +0000
	I1002 10:57:34.300684 2249882 command_runner.go:130] >  Birth: -
	I1002 10:57:34.301016 2249882 start.go:537] Will wait 60s for crictl version
	I1002 10:57:34.301071 2249882 ssh_runner.go:195] Run: which crictl
	I1002 10:57:34.305485 2249882 command_runner.go:130] > /usr/bin/crictl
	I1002 10:57:34.305934 2249882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:57:34.358718 2249882 command_runner.go:130] > Version:  0.1.0
	I1002 10:57:34.358740 2249882 command_runner.go:130] > RuntimeName:  docker
	I1002 10:57:34.358746 2249882 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1002 10:57:34.358753 2249882 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 10:57:34.361283 2249882 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 10:57:34.361358 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:57:34.386601 2249882 command_runner.go:130] > 24.0.6
	I1002 10:57:34.387882 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:57:34.412031 2249882 command_runner.go:130] > 24.0.6
	I1002 10:57:34.417585 2249882 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 10:57:34.417727 2249882 cli_runner.go:164] Run: docker network inspect multinode-899833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:57:34.435497 2249882 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 10:57:34.439960 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:57:34.452968 2249882 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:57:34.453043 2249882 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 10:57:34.472225 2249882 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1002 10:57:34.472246 2249882 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 10:57:34.472253 2249882 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1002 10:57:34.472260 2249882 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1002 10:57:34.472266 2249882 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1002 10:57:34.472272 2249882 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1002 10:57:34.472279 2249882 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1002 10:57:34.472286 2249882 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1002 10:57:34.472293 2249882 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:57:34.472303 2249882 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1002 10:57:34.473901 2249882 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1002 10:57:34.473925 2249882 docker.go:594] Images already preloaded, skipping extraction
	I1002 10:57:34.473990 2249882 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 10:57:34.493305 2249882 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1002 10:57:34.493335 2249882 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1002 10:57:34.493343 2249882 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1002 10:57:34.493364 2249882 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 10:57:34.493371 2249882 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1002 10:57:34.493381 2249882 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1002 10:57:34.493390 2249882 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1002 10:57:34.493403 2249882 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1002 10:57:34.493410 2249882 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:57:34.493416 2249882 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1002 10:57:34.495324 2249882 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1002 10:57:34.495355 2249882 cache_images.go:84] Images are preloaded, skipping loading
	I1002 10:57:34.495445 2249882 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 10:57:34.564548 2249882 command_runner.go:130] > cgroupfs
	I1002 10:57:34.565837 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:57:34.565851 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:57:34.565893 2249882 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:57:34.565914 2249882 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899833 NodeName:multinode-899833 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 10:57:34.566051 2249882 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899833"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:57:34.566122 2249882 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-899833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:57:34.566187 2249882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 10:57:34.578173 2249882 command_runner.go:130] > kubeadm
	I1002 10:57:34.578194 2249882 command_runner.go:130] > kubectl
	I1002 10:57:34.578200 2249882 command_runner.go:130] > kubelet
	I1002 10:57:34.579368 2249882 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:57:34.579455 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 10:57:34.590525 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1002 10:57:34.611752 2249882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 10:57:34.633111 2249882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1002 10:57:34.654223 2249882 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 10:57:34.658635 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:57:34.672523 2249882 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833 for IP: 192.168.58.2
	I1002 10:57:34.672556 2249882 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1d43a94e604cdd7d897bd7b1078cd14b38f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:57:34.672722 2249882 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key
	I1002 10:57:34.672776 2249882 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key
	I1002 10:57:34.672862 2249882 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key
	I1002 10:57:34.672966 2249882 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.key.cee25041
	I1002 10:57:34.673020 2249882 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.key
	I1002 10:57:34.673035 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 10:57:34.673052 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 10:57:34.673075 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 10:57:34.673094 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 10:57:34.673106 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 10:57:34.673123 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 10:57:34.673147 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 10:57:34.673163 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 10:57:34.673227 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem (1338 bytes)
	W1002 10:57:34.673302 2249882 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700_empty.pem, impossibly tiny 0 bytes
	I1002 10:57:34.673319 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 10:57:34.673349 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:57:34.673392 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:57:34.673431 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem (1679 bytes)
	I1002 10:57:34.673489 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:57:34.673538 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:34.673555 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem -> /usr/share/ca-certificates/2139700.pem
	I1002 10:57:34.673568 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /usr/share/ca-certificates/21397002.pem
	I1002 10:57:34.674198 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 10:57:34.702711 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 10:57:34.730794 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 10:57:34.759640 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 10:57:34.790252 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:57:34.818169 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 10:57:34.846612 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:57:34.875667 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 10:57:34.904333 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:57:34.933237 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem --> /usr/share/ca-certificates/2139700.pem (1338 bytes)
	I1002 10:57:34.961882 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /usr/share/ca-certificates/21397002.pem (1708 bytes)
	I1002 10:57:34.990398 2249882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 10:57:35.013604 2249882 ssh_runner.go:195] Run: openssl version
	I1002 10:57:35.020823 2249882 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 10:57:35.021234 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:57:35.034353 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:35.039405 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:35.039431 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:35.039497 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:57:35.047903 2249882 command_runner.go:130] > b5213941
	I1002 10:57:35.048289 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 10:57:35.059634 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2139700.pem && ln -fs /usr/share/ca-certificates/2139700.pem /etc/ssl/certs/2139700.pem"
	I1002 10:57:35.071840 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2139700.pem
	I1002 10:57:35.076656 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:57:35.076702 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:57:35.076760 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2139700.pem
	I1002 10:57:35.085967 2249882 command_runner.go:130] > 51391683
	I1002 10:57:35.086057 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2139700.pem /etc/ssl/certs/51391683.0"
	I1002 10:57:35.098244 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21397002.pem && ln -fs /usr/share/ca-certificates/21397002.pem /etc/ssl/certs/21397002.pem"
	I1002 10:57:35.110695 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21397002.pem
	I1002 10:57:35.115887 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:57:35.115919 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:57:35.115997 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21397002.pem
	I1002 10:57:35.125048 2249882 command_runner.go:130] > 3ec20f2e
	I1002 10:57:35.125205 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21397002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 10:57:35.136624 2249882 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:57:35.141117 2249882 command_runner.go:130] > ca.crt
	I1002 10:57:35.141138 2249882 command_runner.go:130] > ca.key
	I1002 10:57:35.141144 2249882 command_runner.go:130] > healthcheck-client.crt
	I1002 10:57:35.141150 2249882 command_runner.go:130] > healthcheck-client.key
	I1002 10:57:35.141156 2249882 command_runner.go:130] > peer.crt
	I1002 10:57:35.141160 2249882 command_runner.go:130] > peer.key
	I1002 10:57:35.141173 2249882 command_runner.go:130] > server.crt
	I1002 10:57:35.141180 2249882 command_runner.go:130] > server.key
	I1002 10:57:35.141323 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 10:57:35.149908 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.150289 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 10:57:35.158843 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.159258 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 10:57:35.167843 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.168264 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 10:57:35.177020 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.177501 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 10:57:35.186125 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.186517 2249882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 10:57:35.195262 2249882 command_runner.go:130] > Certificate will not expire
	I1002 10:57:35.195324 2249882 kubeadm.go:404] StartCluster: {Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:57:35.195506 2249882 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 10:57:35.216593 2249882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 10:57:35.226419 2249882 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 10:57:35.226489 2249882 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 10:57:35.226512 2249882 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 10:57:35.226532 2249882 command_runner.go:130] > member
	I1002 10:57:35.227627 2249882 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 10:57:35.227645 2249882 kubeadm.go:636] restartCluster start
	I1002 10:57:35.227702 2249882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 10:57:35.237831 2249882 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:35.238281 2249882 kubeconfig.go:135] verify returned: extract IP: "multinode-899833" does not appear in /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:57:35.238376 2249882 kubeconfig.go:146] "multinode-899833" context is missing from /home/jenkins/minikube-integration/17340-2134307/kubeconfig - will repair!
	I1002 10:57:35.238651 2249882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/kubeconfig: {Name:mk62f5c672074becc8cade8f73c1bedcd1d2907c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:57:35.239081 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:57:35.239360 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:57:35.240252 2249882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 10:57:35.240329 2249882 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 10:57:35.251415 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:35.251536 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:35.263441 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:35.263473 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:35.263527 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:35.275752 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:35.776460 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:35.776566 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:35.788569 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:36.275953 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:36.276043 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:36.288115 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:36.776738 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:36.776831 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:36.789002 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:37.276561 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:37.276654 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:37.288780 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:37.775909 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:37.776016 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:37.787877 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:38.276554 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:38.276658 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:38.288439 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:38.776036 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:38.776122 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:38.788248 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:39.276865 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:39.276970 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:39.288857 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:39.776557 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:39.776643 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:39.788732 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:40.275942 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:40.276057 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:40.287885 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:40.776508 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:40.776595 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:40.788381 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:41.275918 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:41.276008 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:41.288287 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:41.776078 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:41.776182 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:41.788178 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:42.276754 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:42.276844 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:42.289744 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:42.775912 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:42.776019 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:42.788320 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:43.275921 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:43.276017 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:43.288423 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:43.775884 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:43.775980 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:43.788801 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:44.276477 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:44.276577 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:44.288972 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:44.776668 2249882 api_server.go:166] Checking apiserver status ...
	I1002 10:57:44.776778 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 10:57:44.789301 2249882 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
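The repeated `pgrep` attempts above follow a fixed poll cadence (roughly every 0.5s, per the timestamps). A minimal sketch of that pattern, assuming a hypothetical helper name `retry_until` (this is not minikube code):

```shell
# Generic poll helper mirroring the loop above: re-run a command
# about every 0.5s until it succeeds or the deadline (in seconds)
# expires. In the log the polled command is:
#   sudo pgrep -xnf 'kube-apiserver.*minikube.*'
retry_until() {
  secs=$1; shift
  end=$(( $(date +%s) + secs ))
  while [ "$(date +%s)" -lt "$end" ]; do
    "$@" && return 0
    sleep 0.5
  done
  return 1
}

# Hypothetical usage matching the log:
# retry_until 90 sudo pgrep -xnf 'kube-apiserver.*minikube.*'
```

When the deadline passes without a hit, the caller falls through to the "needs reconfigure" path seen immediately below.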
	I1002 10:57:45.252038 2249882 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 10:57:45.252071 2249882 kubeadm.go:1128] stopping kube-system containers ...
	I1002 10:57:45.252152 2249882 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 10:57:45.277330 2249882 command_runner.go:130] > f0ac914e78fc
	I1002 10:57:45.277349 2249882 command_runner.go:130] > 7f68c6c1b9a9
	I1002 10:57:45.277355 2249882 command_runner.go:130] > 65189e7d31ed
	I1002 10:57:45.277360 2249882 command_runner.go:130] > 71790b749215
	I1002 10:57:45.277366 2249882 command_runner.go:130] > 9e6412863248
	I1002 10:57:45.277372 2249882 command_runner.go:130] > 4e559448cbec
	I1002 10:57:45.277377 2249882 command_runner.go:130] > 7264383872ff
	I1002 10:57:45.277382 2249882 command_runner.go:130] > 659c42600174
	I1002 10:57:45.277387 2249882 command_runner.go:130] > 584b6ab2c0e0
	I1002 10:57:45.277393 2249882 command_runner.go:130] > d027a8a33607
	I1002 10:57:45.277398 2249882 command_runner.go:130] > a82e59828796
	I1002 10:57:45.277402 2249882 command_runner.go:130] > 1bdae6fab8f9
	I1002 10:57:45.277407 2249882 command_runner.go:130] > 0beca8ac2d3b
	I1002 10:57:45.277414 2249882 command_runner.go:130] > c595b0a59f0e
	I1002 10:57:45.277419 2249882 command_runner.go:130] > 832b4901b722
	I1002 10:57:45.277423 2249882 command_runner.go:130] > 09f490c928ae
	I1002 10:57:45.277428 2249882 command_runner.go:130] > 68f88034ce87
	I1002 10:57:45.277438 2249882 command_runner.go:130] > 0db8e2ef374c
	I1002 10:57:45.277691 2249882 docker.go:463] Stopping containers: [f0ac914e78fc 7f68c6c1b9a9 65189e7d31ed 71790b749215 9e6412863248 4e559448cbec 7264383872ff 659c42600174 584b6ab2c0e0 d027a8a33607 a82e59828796 1bdae6fab8f9 0beca8ac2d3b c595b0a59f0e 832b4901b722 09f490c928ae 68f88034ce87 0db8e2ef374c]
	I1002 10:57:45.277781 2249882 ssh_runner.go:195] Run: docker stop f0ac914e78fc 7f68c6c1b9a9 65189e7d31ed 71790b749215 9e6412863248 4e559448cbec 7264383872ff 659c42600174 584b6ab2c0e0 d027a8a33607 a82e59828796 1bdae6fab8f9 0beca8ac2d3b c595b0a59f0e 832b4901b722 09f490c928ae 68f88034ce87 0db8e2ef374c
	I1002 10:57:45.303066 2249882 command_runner.go:130] > f0ac914e78fc
	I1002 10:57:45.303525 2249882 command_runner.go:130] > 7f68c6c1b9a9
	I1002 10:57:45.303724 2249882 command_runner.go:130] > 65189e7d31ed
	I1002 10:57:45.303735 2249882 command_runner.go:130] > 71790b749215
	I1002 10:57:45.303741 2249882 command_runner.go:130] > 9e6412863248
	I1002 10:57:45.303903 2249882 command_runner.go:130] > 4e559448cbec
	I1002 10:57:45.304068 2249882 command_runner.go:130] > 7264383872ff
	I1002 10:57:45.304077 2249882 command_runner.go:130] > 659c42600174
	I1002 10:57:45.304215 2249882 command_runner.go:130] > 584b6ab2c0e0
	I1002 10:57:45.304225 2249882 command_runner.go:130] > d027a8a33607
	I1002 10:57:45.304333 2249882 command_runner.go:130] > a82e59828796
	I1002 10:57:45.304606 2249882 command_runner.go:130] > 1bdae6fab8f9
	I1002 10:57:45.305075 2249882 command_runner.go:130] > 0beca8ac2d3b
	I1002 10:57:45.305912 2249882 command_runner.go:130] > c595b0a59f0e
	I1002 10:57:45.305923 2249882 command_runner.go:130] > 832b4901b722
	I1002 10:57:45.306082 2249882 command_runner.go:130] > 09f490c928ae
	I1002 10:57:45.306091 2249882 command_runner.go:130] > 68f88034ce87
	I1002 10:57:45.306556 2249882 command_runner.go:130] > 0db8e2ef374c
	I1002 10:57:45.308111 2249882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 10:57:45.324644 2249882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 10:57:45.334773 2249882 command_runner.go:130] > -rw------- 1 root root 5643 Oct  2 10:54 /etc/kubernetes/admin.conf
	I1002 10:57:45.334796 2249882 command_runner.go:130] > -rw------- 1 root root 5652 Oct  2 10:54 /etc/kubernetes/controller-manager.conf
	I1002 10:57:45.334804 2249882 command_runner.go:130] > -rw------- 1 root root 2003 Oct  2 10:54 /etc/kubernetes/kubelet.conf
	I1002 10:57:45.334813 2249882 command_runner.go:130] > -rw------- 1 root root 5604 Oct  2 10:54 /etc/kubernetes/scheduler.conf
	I1002 10:57:45.335996 2249882 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct  2 10:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct  2 10:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Oct  2 10:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct  2 10:54 /etc/kubernetes/scheduler.conf
	
	I1002 10:57:45.336100 2249882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 10:57:45.346075 2249882 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1002 10:57:45.347334 2249882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 10:57:45.357942 2249882 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1002 10:57:45.358020 2249882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 10:57:45.368497 2249882 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:45.368566 2249882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 10:57:45.380096 2249882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 10:57:45.390654 2249882 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 10:57:45.390753 2249882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
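The grep-and-remove sequence above (admin.conf and kubelet.conf contain the expected server URL and survive; controller-manager.conf and scheduler.conf do not and are deleted) can be sketched as a small helper. `prune_stale_kubeconfigs` is a hypothetical name, and the real code runs under sudo directly against /etc/kubernetes:

```shell
# Sketch of the check above: keep each kubeconfig only if it points
# at the expected apiserver URL; otherwise delete it so that
# `kubeadm init phase kubeconfig` regenerates it.
# prune_stale_kubeconfigs is an illustrative helper, not minikube code.
prune_stale_kubeconfigs() {
  dir=$1
  expected='https://control-plane.minikube.internal:8443'
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    path="$dir/$f"
    [ -e "$path" ] || continue
    grep -q "$expected" "$path" || rm -f "$path"
  done
}
```

Deleting rather than patching the stale files is what lets the subsequent `kubeadm init phase kubeconfig all` rewrite only the two missing configs while reusing the two that passed the check.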
	I1002 10:57:45.400778 2249882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 10:57:45.411456 2249882 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 10:57:45.411482 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:45.468444 2249882 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 10:57:45.471105 2249882 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1002 10:57:45.471990 2249882 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1002 10:57:45.472699 2249882 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 10:57:45.473650 2249882 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1002 10:57:45.474286 2249882 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1002 10:57:45.474758 2249882 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1002 10:57:45.475432 2249882 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1002 10:57:45.476053 2249882 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1002 10:57:45.476620 2249882 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 10:57:45.477180 2249882 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 10:57:45.477581 2249882 command_runner.go:130] > [certs] Using the existing "sa" key
	I1002 10:57:45.480349 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:45.528573 2249882 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 10:57:45.739295 2249882 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1002 10:57:46.565891 2249882 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1002 10:57:47.154018 2249882 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 10:57:47.607732 2249882 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 10:57:47.611436 2249882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.131024828s)
	I1002 10:57:47.611466 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:47.675609 2249882 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 10:57:47.678169 2249882 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 10:57:47.678422 2249882 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 10:57:47.793144 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:47.860666 2249882 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 10:57:47.860687 2249882 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 10:57:47.873390 2249882 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 10:57:47.874423 2249882 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 10:57:47.877685 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:47.950189 2249882 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
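The reconfigure sequence above drives five standalone `kubeadm init` phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A dry-run sketch that just prints each command (the log runs them via sudo with /var/lib/minikube/binaries/v1.28.2 prepended to PATH; this listing is an illustration only):

```shell
# Dry-run sketch of the phase sequence above: print each
# `kubeadm init phase` invocation instead of executing it.
cfg=/var/tmp/minikube/kubeadm.yaml
kubeadm_phases() {
  for phase in 'certs all' 'kubeconfig all' 'kubelet-start' \
               'control-plane all' 'etcd local'; do
    echo "kubeadm init phase $phase --config $cfg"
  done
}
```

Running the phases individually, rather than a full `kubeadm init`, is what allows the existing certificates and the two surviving kubeconfigs to be reused, as the "Using existing ..." lines above show.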
	I1002 10:57:47.957579 2249882 api_server.go:52] waiting for apiserver process to appear ...
	I1002 10:57:47.957649 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:47.974750 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:48.497247 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:48.997063 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:49.496664 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:57:49.509548 2249882 command_runner.go:130] > 1961
	I1002 10:57:49.511209 2249882 api_server.go:72] duration metric: took 1.553629429s to wait for apiserver process to appear ...
	I1002 10:57:49.511227 2249882 api_server.go:88] waiting for apiserver healthz status ...
	I1002 10:57:49.511245 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:53.197381 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 10:57:53.197406 2249882 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 10:57:53.197416 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:53.245014 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 10:57:53.245049 2249882 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 10:57:53.745692 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:53.754582 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 10:57:53.754610 2249882 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 10:57:54.245811 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:54.258506 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 10:57:54.258534 2249882 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 10:57:54.745980 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:57:54.755017 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
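The healthz wait above polls until the endpoint answers 200, treating the intermediate 403s (the anonymous user hitting RBAC before the bootstrap-roles poststarthook finishes) and 500s (poststarthooks still settling) the same as any non-200 reply. A minimal sketch, with the probe factored out so the loop can be exercised without a live apiserver (`probe` and `wait_healthz` are illustrative names, not minikube code; the URL in the log is https://192.168.58.2:8443/healthz):

```shell
# Probe returns the HTTP status code of a GET against the URL;
# -k skips TLS verification, as appropriate for a self-signed
# in-cluster certificate.
probe() { curl -ks -o /dev/null -w '%{http_code}' "$1"; }

# Poll until the endpoint returns 200 or the attempts run out.
# 403 and 500 fall through the same "not ready yet" branch.
wait_healthz() {
  url=$1; tries=$2
  n=0
  while [ "$n" -lt "$tries" ]; do
    [ "$(probe "$url")" = "200" ] && return 0
    n=$((n + 1))
    sleep 0.5
  done
  return 1
}
```

Once the 200 arrives, the caller proceeds to the /version request seen below to read the control-plane version.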
	I1002 10:57:54.755088 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1002 10:57:54.755102 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:54.755112 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:54.755120 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:54.770255 2249882 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1002 10:57:54.770282 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:54.770291 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:54.770298 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:54.770304 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:54.770310 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:54.770317 2249882 round_trippers.go:580]     Content-Length: 263
	I1002 10:57:54.770326 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:54 GMT
	I1002 10:57:54.770332 2249882 round_trippers.go:580]     Audit-Id: 9ac7a985-84dd-49ad-986e-5586e8559991
	I1002 10:57:54.770357 2249882 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1002 10:57:54.770440 2249882 api_server.go:141] control plane version: v1.28.2
	I1002 10:57:54.770459 2249882 api_server.go:131] duration metric: took 5.259224686s to wait for apiserver health ...
	I1002 10:57:54.770467 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:57:54.770476 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:57:54.772601 2249882 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 10:57:54.774150 2249882 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 10:57:54.779163 2249882 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 10:57:54.779187 2249882 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1002 10:57:54.779198 2249882 command_runner.go:130] > Device: 36h/54d	Inode: 1826972     Links: 1
	I1002 10:57:54.779206 2249882 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:57:54.779213 2249882 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1002 10:57:54.779219 2249882 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1002 10:57:54.779229 2249882 command_runner.go:130] > Change: 2023-10-02 10:36:11.204484217 +0000
	I1002 10:57:54.779238 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:36:11.160484379 +0000
	I1002 10:57:54.779270 2249882 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 10:57:54.779286 2249882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 10:57:54.816074 2249882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 10:57:55.893633 2249882 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 10:57:55.898318 2249882 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 10:57:55.901892 2249882 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 10:57:55.916270 2249882 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 10:57:55.921696 2249882 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.105584462s)
	I1002 10:57:55.921747 2249882 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 10:57:55.921829 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:57:55.921841 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:55.921850 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:55.921856 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:55.926166 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:57:55.926196 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:55.926205 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:55.926213 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:55 GMT
	I1002 10:57:55.926219 2249882 round_trippers.go:580]     Audit-Id: 98ec81ae-9f8e-40e0-ac82-2c30f3929647
	I1002 10:57:55.926225 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:55.926232 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:55.926241 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:55.927072 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"707"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"702","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85500 chars]
	I1002 10:57:55.932374 2249882 system_pods.go:59] 12 kube-system pods found
	I1002 10:57:55.932406 2249882 system_pods.go:61] "coredns-5dd5756b68-s5pf5" [f72cd720-6739-45d2-a014-97b1e19d2574] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 10:57:55.932416 2249882 system_pods.go:61] "etcd-multinode-899833" [50fafe88-1106-4021-9c0c-7bb9d9d17ffb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 10:57:55.932423 2249882 system_pods.go:61] "kindnet-jbhdj" [82532e9c-9f56-44a1-a627-ec7462b9738f] Running
	I1002 10:57:55.932443 2249882 system_pods.go:61] "kindnet-kp6fb" [260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 10:57:55.932457 2249882 system_pods.go:61] "kindnet-lmfm5" [8790fa37-873d-4ec3-a9b3-020dcc4a8e1d] Running
	I1002 10:57:55.932464 2249882 system_pods.go:61] "kube-apiserver-multinode-899833" [fb05b79f-58ee-4097-aa20-b9721f21d29c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 10:57:55.932473 2249882 system_pods.go:61] "kube-controller-manager-multinode-899833" [92b1c97d-b38b-405b-9e51-272591b87dcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 10:57:55.932484 2249882 system_pods.go:61] "kube-proxy-76wth" [675afe15-d632-48d5-8e1e-af889d799786] Running
	I1002 10:57:55.932492 2249882 system_pods.go:61] "kube-proxy-fjcp8" [2d159cb7-69ca-4b3c-b918-b698bb157220] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 10:57:55.932501 2249882 system_pods.go:61] "kube-proxy-xnhqd" [1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd] Running
	I1002 10:57:55.932507 2249882 system_pods.go:61] "kube-scheduler-multinode-899833" [65999631-952f-42f1-ae73-f32996dc19fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 10:57:55.932515 2249882 system_pods.go:61] "storage-provisioner" [97d5bb7f-502d-4838-a926-c613783c1588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:57:55.932526 2249882 system_pods.go:74] duration metric: took 10.770059ms to wait for pod list to return data ...
	I1002 10:57:55.932534 2249882 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:57:55.932601 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 10:57:55.932609 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:55.932617 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:55.932627 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:55.935347 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:55.935369 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:55.935377 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:55.935388 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:55.935394 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:55.935400 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:55 GMT
	I1002 10:57:55.935407 2249882 round_trippers.go:580]     Audit-Id: 27dfd28a-dcbd-4a0c-82bf-c1751b6e07cf
	I1002 10:57:55.935414 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:55.935875 2249882 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"707"},"items":[{"metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15863 chars]
	I1002 10:57:55.936893 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:57:55.936923 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:57:55.936934 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:57:55.936944 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:57:55.936949 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:57:55.936957 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:57:55.936961 2249882 node_conditions.go:105] duration metric: took 4.418741ms to run NodePressure ...
	I1002 10:57:55.936979 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 10:57:56.097301 2249882 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1002 10:57:56.198512 2249882 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1002 10:57:56.202091 2249882 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 10:57:56.202188 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1002 10:57:56.202194 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.202203 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.202210 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.206618 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:57:56.206637 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.206645 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.206652 2249882 round_trippers.go:580]     Audit-Id: 4da4d3ed-5d39-4519-9288-6ac9ca8fe820
	I1002 10:57:56.206658 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.206664 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.206670 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.206676 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.207659 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"714"},"items":[{"metadata":{"name":"etcd-multinode-899833","namespace":"kube-system","uid":"50fafe88-1106-4021-9c0c-7bb9d9d17ffb","resourceVersion":"698","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.mirror":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.seen":"2023-10-02T10:54:43.504344255Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 31430 chars]
	I1002 10:57:56.209150 2249882 kubeadm.go:787] kubelet initialised
	I1002 10:57:56.209194 2249882 kubeadm.go:788] duration metric: took 7.08354ms waiting for restarted kubelet to initialise ...
	I1002 10:57:56.209220 2249882 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:57:56.209327 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:57:56.209355 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.209377 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.209401 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.215518 2249882 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1002 10:57:56.215536 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.215544 2249882 round_trippers.go:580]     Audit-Id: e783ef9b-6eb5-4c1e-bf6d-a25c93f38237
	I1002 10:57:56.215550 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.215556 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.215562 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.215568 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.215574 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.217626 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"714"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85087 chars]
	I1002 10:57:56.221119 2249882 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:57:56.221202 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:56.221209 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.221217 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.221225 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.224561 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:57:56.224608 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.224628 2249882 round_trippers.go:580]     Audit-Id: 22f50135-0f51-4085-b064-f5a395fc1ecf
	I1002 10:57:56.224650 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.224686 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.224708 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.224729 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.224750 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.225241 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:56.225840 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:56.225875 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.225898 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.225928 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.228247 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:56.228285 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.228305 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.228328 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.228362 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.228383 2249882 round_trippers.go:580]     Audit-Id: 4e0c0ca6-c0fd-405d-be62-b0c025c7eecc
	I1002 10:57:56.228403 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.228423 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.228639 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:56.229086 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:56.229130 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.229151 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.229173 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.232794 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:57:56.232840 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.232863 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.232885 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.232916 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.232939 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.232960 2249882 round_trippers.go:580]     Audit-Id: 16f10759-ca9d-4927-957a-f9823e10c897
	I1002 10:57:56.232981 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.233157 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:56.233807 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:56.233844 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.233866 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.233888 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.236073 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:56.236114 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.236135 2249882 round_trippers.go:580]     Audit-Id: 0aabe7f9-662e-40b8-94d4-d546e9df96b7
	I1002 10:57:56.236155 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.236190 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.236212 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.236230 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.236251 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.236401 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:56.737112 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:56.737132 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.737141 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.737164 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.739617 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:56.739641 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.739650 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.739657 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.739663 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.739669 2249882 round_trippers.go:580]     Audit-Id: c6d0cb3e-5db2-4303-b733-ea0f19a8c3de
	I1002 10:57:56.739680 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.739686 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.739878 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:56.740413 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:56.740430 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:56.740438 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:56.740445 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:56.742587 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:56.742607 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:56.742615 2249882 round_trippers.go:580]     Audit-Id: c6d6e03f-1091-4094-881f-1e2ff28d5598
	I1002 10:57:56.742622 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:56.742628 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:56.742651 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:56.742663 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:56.742670 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:56 GMT
	I1002 10:57:56.742915 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:57.237001 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:57.237025 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:57.237035 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:57.237042 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:57.239704 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:57.239778 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:57.239800 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:57.239824 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:57.239860 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:57.239881 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:57.239903 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:57 GMT
	I1002 10:57:57.239965 2249882 round_trippers.go:580]     Audit-Id: b377ef5a-1cf9-4f5b-bd01-187b5dce5d09
	I1002 10:57:57.240114 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:57.240671 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:57.240688 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:57.240697 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:57.240704 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:57.243039 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:57.243096 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:57.243116 2249882 round_trippers.go:580]     Audit-Id: 286c7bfd-5cf9-40b5-8d69-ad3059fc8fca
	I1002 10:57:57.243138 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:57.243173 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:57.243195 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:57.243215 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:57.243236 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:57 GMT
	I1002 10:57:57.243387 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:57.737510 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:57.737535 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:57.737545 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:57.737552 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:57.740282 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:57.740305 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:57.740313 2249882 round_trippers.go:580]     Audit-Id: 9908d72d-92a6-44a9-963b-732bf8f019c7
	I1002 10:57:57.740320 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:57.740326 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:57.740332 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:57.740338 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:57.740345 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:57 GMT
	I1002 10:57:57.740731 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:57.741370 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:57.741388 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:57.741399 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:57.741406 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:57.743766 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:57.743784 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:57.743791 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:57.743798 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:57.743804 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:57.743810 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:57 GMT
	I1002 10:57:57.743816 2249882 round_trippers.go:580]     Audit-Id: 1eb46903-0b4e-48e8-9ce5-1e390482547f
	I1002 10:57:57.743823 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:57.743948 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:58.237027 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:58.237051 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:58.237060 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:58.237067 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:58.239856 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:58.239881 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:58.239890 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:58.239898 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:58 GMT
	I1002 10:57:58.239912 2249882 round_trippers.go:580]     Audit-Id: 9925a870-4874-4f4b-8d61-1486ed1394e2
	I1002 10:57:58.239919 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:58.239929 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:58.239941 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:58.240435 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:58.240978 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:58.240991 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:58.240999 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:58.241006 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:58.243342 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:58.243366 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:58.243374 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:58.243382 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:58.243388 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:58 GMT
	I1002 10:57:58.243394 2249882 round_trippers.go:580]     Audit-Id: 4d430885-95bd-45cf-aa68-1352adc12543
	I1002 10:57:58.243405 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:58.243411 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:58.243941 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:58.244322 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:57:58.737004 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:58.737026 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:58.737037 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:58.737044 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:58.739645 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:58.739666 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:58.739674 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:58 GMT
	I1002 10:57:58.739681 2249882 round_trippers.go:580]     Audit-Id: 18b520d8-6b80-42ae-bddc-3c5ef3a7f198
	I1002 10:57:58.739687 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:58.739694 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:58.739699 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:58.739706 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:58.739968 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:58.740540 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:58.740557 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:58.740567 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:58.740574 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:58.743004 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:58.743022 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:58.743031 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:58.743038 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:58 GMT
	I1002 10:57:58.743044 2249882 round_trippers.go:580]     Audit-Id: 561b9bee-35a5-4f48-8917-fdb8530865c3
	I1002 10:57:58.743050 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:58.743057 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:58.743063 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:58.743235 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:59.237135 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:59.237157 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:59.237168 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:59.237175 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:59.240119 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:59.240148 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:59.240157 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:59.240164 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:59.240171 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:59.240177 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:59 GMT
	I1002 10:57:59.240184 2249882 round_trippers.go:580]     Audit-Id: 7c9892ee-14bb-452f-82b9-3f8815279e73
	I1002 10:57:59.240191 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:59.240313 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:59.240954 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:59.240973 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:59.240983 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:59.240991 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:59.245987 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:57:59.246010 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:59.246018 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:59 GMT
	I1002 10:57:59.246024 2249882 round_trippers.go:580]     Audit-Id: 1cf2948c-6d0c-4a19-9a03-8d6878a7d405
	I1002 10:57:59.246031 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:59.246037 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:59.246043 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:59.246049 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:59.246184 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:57:59.737045 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:57:59.737070 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:59.737081 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:59.737089 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:59.739950 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:59.740082 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:59.740208 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:59.740222 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:59 GMT
	I1002 10:57:59.740229 2249882 round_trippers.go:580]     Audit-Id: 9b6ad5f0-8394-42e6-ad03-3ceda1a221df
	I1002 10:57:59.740235 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:59.740254 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:59.740275 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:59.740467 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:57:59.741022 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:57:59.741039 2249882 round_trippers.go:469] Request Headers:
	I1002 10:57:59.741059 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:57:59.741067 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:57:59.743522 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:57:59.743539 2249882 round_trippers.go:577] Response Headers:
	I1002 10:57:59.743547 2249882 round_trippers.go:580]     Audit-Id: 4e860e8a-f2ec-4674-b038-1a9aa304c4a1
	I1002 10:57:59.743553 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:57:59.743565 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:57:59.743584 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:57:59.743591 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:57:59.743603 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:57:59 GMT
	I1002 10:57:59.743846 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:00.236990 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:00.237015 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:00.237025 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:00.237033 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:00.240291 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:00.240321 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:00.240343 2249882 round_trippers.go:580]     Audit-Id: e2c846ed-da4b-4ad9-a645-218760c6f7e4
	I1002 10:58:00.240350 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:00.240359 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:00.240365 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:00.240372 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:00.240381 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:00 GMT
	I1002 10:58:00.240584 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:00.241135 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:00.241153 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:00.241162 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:00.241169 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:00.243773 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:00.243795 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:00.243803 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:00.243810 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:00.243816 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:00 GMT
	I1002 10:58:00.243822 2249882 round_trippers.go:580]     Audit-Id: 280392ad-8049-4a65-9ddb-0cc00624e4cc
	I1002 10:58:00.243828 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:00.243835 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:00.243994 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:00.244380 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:00.737667 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:00.737702 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:00.737713 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:00.737721 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:00.740329 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:00.740348 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:00.740359 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:00.740366 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:00.740372 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:00.740378 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:00 GMT
	I1002 10:58:00.740385 2249882 round_trippers.go:580]     Audit-Id: 531e8243-b389-49ae-a19a-37d070cd580a
	I1002 10:58:00.740391 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:00.740499 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:00.741049 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:00.741066 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:00.741075 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:00.741088 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:00.743323 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:00.743341 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:00.743349 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:00.743356 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:00.743362 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:00.743370 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:00 GMT
	I1002 10:58:00.743380 2249882 round_trippers.go:580]     Audit-Id: 4a3c0606-2207-4e13-b926-42c9205e0271
	I1002 10:58:00.743386 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:00.743625 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:01.237724 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:01.237746 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:01.237755 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:01.237766 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:01.240747 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:01.240773 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:01.240783 2249882 round_trippers.go:580]     Audit-Id: 6ca1fdf5-d511-4927-a9c7-3a7920a9db0c
	I1002 10:58:01.240790 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:01.240796 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:01.240835 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:01.240848 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:01.240855 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:01 GMT
	I1002 10:58:01.241104 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:01.241726 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:01.241743 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:01.241752 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:01.241760 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:01.244325 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:01.244360 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:01.244369 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:01.244376 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:01.244382 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:01.244388 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:01.244394 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:01 GMT
	I1002 10:58:01.244400 2249882 round_trippers.go:580]     Audit-Id: 42cce81d-bb60-42ad-b770-d7af9a70669c
	I1002 10:58:01.244534 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:01.737647 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:01.737672 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:01.737684 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:01.737692 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:01.740620 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:01.740742 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:01.740761 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:01.740769 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:01.740775 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:01.740794 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:01 GMT
	I1002 10:58:01.740812 2249882 round_trippers.go:580]     Audit-Id: 46addf17-480e-4f41-bae0-c7ed80a68673
	I1002 10:58:01.740819 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:01.740933 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:01.741501 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:01.741519 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:01.741528 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:01.741538 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:01.743939 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:01.743966 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:01.743975 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:01.744001 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:01.744010 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:01 GMT
	I1002 10:58:01.744020 2249882 round_trippers.go:580]     Audit-Id: 70bd939e-bb94-4ff3-a01e-4a6397a07172
	I1002 10:58:01.744026 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:01.744038 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:01.744192 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:02.237217 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:02.237245 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:02.237290 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:02.237299 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:02.240055 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:02.240115 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:02.240145 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:02.240159 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:02 GMT
	I1002 10:58:02.240166 2249882 round_trippers.go:580]     Audit-Id: db0be2ea-7760-4b43-b991-7092331f1993
	I1002 10:58:02.240185 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:02.240196 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:02.240203 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:02.240404 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:02.241038 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:02.241059 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:02.241068 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:02.241075 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:02.243614 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:02.243640 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:02.243702 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:02.243719 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:02 GMT
	I1002 10:58:02.243727 2249882 round_trippers.go:580]     Audit-Id: 308414a9-6252-414b-9c41-76c1578c5d05
	I1002 10:58:02.243741 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:02.243748 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:02.243755 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:02.244012 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:02.244392 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:02.737099 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:02.737123 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:02.737134 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:02.737141 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:02.739960 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:02.740075 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:02.740090 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:02.740100 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:02.740107 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:02.740116 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:02 GMT
	I1002 10:58:02.740125 2249882 round_trippers.go:580]     Audit-Id: c66b21f6-9180-400b-8305-78d59def8537
	I1002 10:58:02.740134 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:02.740264 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:02.740895 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:02.740916 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:02.740928 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:02.740936 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:02.743484 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:02.743547 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:02.743571 2249882 round_trippers.go:580]     Audit-Id: 1a566655-ad28-4942-b01e-89ae12782aad
	I1002 10:58:02.743593 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:02.743631 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:02.743653 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:02.743666 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:02.743672 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:02 GMT
	I1002 10:58:02.743803 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:03.237015 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:03.237037 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:03.237049 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:03.237056 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:03.239628 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:03.239691 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:03.239714 2249882 round_trippers.go:580]     Audit-Id: 92a42f5d-41aa-415d-a745-c5842fe185be
	I1002 10:58:03.239737 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:03.239773 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:03.239786 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:03.239794 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:03.239800 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:03 GMT
	I1002 10:58:03.239960 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:03.240521 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:03.240540 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:03.240548 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:03.240561 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:03.242674 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:03.242709 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:03.242717 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:03.242724 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:03 GMT
	I1002 10:58:03.242735 2249882 round_trippers.go:580]     Audit-Id: e9de74e4-6832-4fbd-a028-4a36a42614f3
	I1002 10:58:03.242747 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:03.242754 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:03.242768 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:03.242907 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:03.738012 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:03.738039 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:03.738049 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:03.738056 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:03.740601 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:03.740669 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:03.740754 2249882 round_trippers.go:580]     Audit-Id: f00e9cfe-b5f3-4c83-ba3e-865caa96060f
	I1002 10:58:03.740783 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:03.740795 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:03.740802 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:03.740809 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:03.740815 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:03 GMT
	I1002 10:58:03.740916 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:03.741485 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:03.741505 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:03.741513 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:03.741520 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:03.743761 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:03.743779 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:03.743787 2249882 round_trippers.go:580]     Audit-Id: d7db2ecf-08dc-45a5-abfa-58b6f6877907
	I1002 10:58:03.743794 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:03.743800 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:03.743806 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:03.743813 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:03.743823 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:03 GMT
	I1002 10:58:03.744136 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:04.237620 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:04.237649 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:04.237659 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:04.237673 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:04.240876 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:04.240934 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:04.240956 2249882 round_trippers.go:580]     Audit-Id: af7c4c94-819e-4d98-87dd-e2b1549b6a7d
	I1002 10:58:04.240979 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:04.241016 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:04.241040 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:04.241061 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:04.241082 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:04 GMT
	I1002 10:58:04.241228 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:04.241801 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:04.241820 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:04.241828 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:04.241835 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:04.244078 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:04.244127 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:04.244150 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:04.244173 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:04.244207 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:04.244223 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:04 GMT
	I1002 10:58:04.244230 2249882 round_trippers.go:580]     Audit-Id: 88986ffc-b1e0-41d5-bdbe-448c765f8046
	I1002 10:58:04.244237 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:04.244379 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:04.244748 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:04.737246 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:04.737289 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:04.737299 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:04.737306 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:04.739908 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:04.739970 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:04.739992 2249882 round_trippers.go:580]     Audit-Id: 1d61dbb5-8f42-4384-9081-0267efaa8427
	I1002 10:58:04.740015 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:04.740028 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:04.740050 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:04.740058 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:04.740065 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:04 GMT
	I1002 10:58:04.740200 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:04.740747 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:04.740762 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:04.740771 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:04.740778 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:04.743057 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:04.743079 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:04.743087 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:04.743094 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:04.743101 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:04 GMT
	I1002 10:58:04.743107 2249882 round_trippers.go:580]     Audit-Id: d88e1388-ac95-49f4-ac8c-ed76e47293b0
	I1002 10:58:04.743113 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:04.743124 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:04.743431 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:05.237570 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:05.237594 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:05.237604 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:05.237612 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:05.240269 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:05.240336 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:05.240353 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:05.240361 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:05.240367 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:05.240374 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:05.240380 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:05 GMT
	I1002 10:58:05.240386 2249882 round_trippers.go:580]     Audit-Id: c7ad9ed8-174a-4e3a-bea5-f94e3fe0430f
	I1002 10:58:05.240568 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:05.241121 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:05.241136 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:05.241145 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:05.241152 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:05.243543 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:05.243566 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:05.243578 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:05.243585 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:05 GMT
	I1002 10:58:05.243591 2249882 round_trippers.go:580]     Audit-Id: a80d0eb6-e476-47f0-9c76-98f71f404765
	I1002 10:58:05.243597 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:05.243607 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:05.243620 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:05.243743 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:05.737739 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:05.737772 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:05.737783 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:05.737790 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:05.740515 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:05.740539 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:05.740547 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:05.740553 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:05.740560 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:05 GMT
	I1002 10:58:05.740566 2249882 round_trippers.go:580]     Audit-Id: 6f330d73-cb69-4c0f-96a5-33c500fa2a29
	I1002 10:58:05.740572 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:05.740578 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:05.740944 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:05.741544 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:05.741563 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:05.741573 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:05.741580 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:05.743879 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:05.743942 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:05.743964 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:05.743986 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:05.744021 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:05.744049 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:05.744073 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:05 GMT
	I1002 10:58:05.744096 2249882 round_trippers.go:580]     Audit-Id: 59e0f04d-7f8e-482d-8d41-0bb84b3c5101
	I1002 10:58:05.744579 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:06.237776 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:06.237799 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:06.237808 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:06.237816 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:06.240295 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:06.240332 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:06.240340 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:06 GMT
	I1002 10:58:06.240347 2249882 round_trippers.go:580]     Audit-Id: 956dfcbf-5eab-4e19-b386-c0a0f2d6eace
	I1002 10:58:06.240353 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:06.240359 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:06.240365 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:06.240372 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:06.240565 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:06.241222 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:06.241248 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:06.241281 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:06.241291 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:06.243552 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:06.243576 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:06.243587 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:06.243594 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:06 GMT
	I1002 10:58:06.243600 2249882 round_trippers.go:580]     Audit-Id: a959105f-4af9-4097-8008-0b96ce522c3f
	I1002 10:58:06.243607 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:06.243616 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:06.243630 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:06.243770 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:06.737590 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:06.737615 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:06.737625 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:06.737632 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:06.740140 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:06.740166 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:06.740182 2249882 round_trippers.go:580]     Audit-Id: 46680136-1c11-47d4-a37b-ffcee01f8c19
	I1002 10:58:06.740189 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:06.740196 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:06.740202 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:06.740208 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:06.740218 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:06 GMT
	I1002 10:58:06.740361 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:06.740903 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:06.740920 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:06.740929 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:06.740937 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:06.743240 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:06.743261 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:06.743269 2249882 round_trippers.go:580]     Audit-Id: 8f226841-d387-499e-b255-ae1f418305cc
	I1002 10:58:06.743275 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:06.743281 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:06.743296 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:06.743303 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:06.743309 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:06 GMT
	I1002 10:58:06.743448 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:06.743811 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:07.237842 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:07.237866 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:07.237877 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:07.237884 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:07.240379 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:07.240403 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:07.240412 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:07.240418 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:07 GMT
	I1002 10:58:07.240425 2249882 round_trippers.go:580]     Audit-Id: 51eb0bfe-4320-4dd4-bb22-6f5b2240fe4d
	I1002 10:58:07.240431 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:07.240437 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:07.240444 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:07.240624 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:07.241150 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:07.241164 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:07.241173 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:07.241180 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:07.243555 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:07.243576 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:07.243584 2249882 round_trippers.go:580]     Audit-Id: 7ef87613-2ec3-45f1-a923-f8fab06e1def
	I1002 10:58:07.243591 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:07.243597 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:07.243603 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:07.243613 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:07.243620 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:07 GMT
	I1002 10:58:07.243727 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:07.737848 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:07.737871 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:07.737881 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:07.737888 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:07.740589 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:07.740658 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:07.740681 2249882 round_trippers.go:580]     Audit-Id: 1af53379-9077-49e3-9657-50b75f9b7c15
	I1002 10:58:07.740704 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:07.740741 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:07.740767 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:07.740791 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:07.740819 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:07 GMT
	I1002 10:58:07.740930 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:07.741511 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:07.741528 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:07.741537 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:07.741544 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:07.743957 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:07.744020 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:07.744057 2249882 round_trippers.go:580]     Audit-Id: 894c1c3a-4430-4b47-9e56-3384565e9850
	I1002 10:58:07.744116 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:07.744143 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:07.744155 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:07.744162 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:07.744182 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:07 GMT
	I1002 10:58:07.744335 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:08.237728 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:08.237830 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:08.237854 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:08.237876 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:08.240764 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:08.240831 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:08.240854 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:08.240877 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:08.240923 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:08.240965 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:08 GMT
	I1002 10:58:08.241003 2249882 round_trippers.go:580]     Audit-Id: c44c431e-1dc4-4af9-8b33-06a8276292a5
	I1002 10:58:08.241027 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:08.241808 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:08.242575 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:08.242626 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:08.242650 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:08.242671 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:08.245142 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:08.245197 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:08.245219 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:08 GMT
	I1002 10:58:08.245295 2249882 round_trippers.go:580]     Audit-Id: 2253c49c-b4ce-4dbd-aaf6-4a9b0c051ba8
	I1002 10:58:08.245321 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:08.245344 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:08.245368 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:08.245402 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:08.245569 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:08.737589 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:08.737612 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:08.737621 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:08.737628 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:08.740299 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:08.740380 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:08.740406 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:08.740414 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:08.740423 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:08.740430 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:08 GMT
	I1002 10:58:08.740439 2249882 round_trippers.go:580]     Audit-Id: a9ce97fd-6f4f-43dd-a232-e7651d54d6f8
	I1002 10:58:08.740446 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:08.740553 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:08.741095 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:08.741110 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:08.741118 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:08.741125 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:08.743344 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:08.743402 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:08.743423 2249882 round_trippers.go:580]     Audit-Id: 1d16c211-6916-4567-be29-144b8b56754e
	I1002 10:58:08.743444 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:08.743471 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:08.743480 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:08.743486 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:08.743492 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:08 GMT
	I1002 10:58:08.743621 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:08.744009 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:09.237034 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:09.237057 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:09.237067 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:09.237074 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:09.239673 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:09.239754 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:09.239772 2249882 round_trippers.go:580]     Audit-Id: 33839bdb-43f4-428d-b5a3-5e6b3e7dd972
	I1002 10:58:09.239780 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:09.239786 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:09.239793 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:09.239799 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:09.239811 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:09 GMT
	I1002 10:58:09.239967 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:09.240519 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:09.240535 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:09.240543 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:09.240550 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:09.242834 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:09.242862 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:09.242882 2249882 round_trippers.go:580]     Audit-Id: e2930e3d-2447-45e9-b5d4-6f408a3e417a
	I1002 10:58:09.242889 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:09.242896 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:09.242902 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:09.242908 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:09.242919 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:09 GMT
	I1002 10:58:09.243063 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:09.737106 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:09.737130 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:09.737144 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:09.737153 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:09.740118 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:09.740185 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:09.740209 2249882 round_trippers.go:580]     Audit-Id: a16b6785-4281-4ce5-a74b-280f77c56faa
	I1002 10:58:09.740232 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:09.740264 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:09.740290 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:09.740375 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:09.740398 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:09 GMT
	I1002 10:58:09.740516 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:09.741113 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:09.741138 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:09.741146 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:09.741153 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:09.743782 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:09.743804 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:09.743812 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:09.743819 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:09.743825 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:09.743832 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:09.743846 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:09 GMT
	I1002 10:58:09.743852 2249882 round_trippers.go:580]     Audit-Id: 18b2f6fb-9dc2-44df-963a-70f7f3891ff4
	I1002 10:58:09.743980 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:10.237050 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:10.237075 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:10.237085 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:10.237092 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:10.239797 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:10.239855 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:10.239879 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:10.239904 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:10 GMT
	I1002 10:58:10.239939 2249882 round_trippers.go:580]     Audit-Id: 827fb503-c740-45aa-a479-58fdbf3a35f1
	I1002 10:58:10.239951 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:10.239958 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:10.239964 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:10.240113 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:10.240650 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:10.240665 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:10.240673 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:10.240680 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:10.242952 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:10.243012 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:10.243033 2249882 round_trippers.go:580]     Audit-Id: 25aa6b4f-f21a-4122-8366-e49e6d548075
	I1002 10:58:10.243054 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:10.243090 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:10.243116 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:10.243139 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:10.243177 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:10 GMT
	I1002 10:58:10.243327 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:10.737422 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:10.737446 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:10.737455 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:10.737463 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:10.740137 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:10.740197 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:10.740212 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:10.740220 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:10.740226 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:10.740233 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:10.740239 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:10 GMT
	I1002 10:58:10.740250 2249882 round_trippers.go:580]     Audit-Id: 594eb6a0-57d6-4505-8242-83e711c61e8a
	I1002 10:58:10.740549 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:10.741122 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:10.741166 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:10.741182 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:10.741189 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:10.743390 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:10.743450 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:10.743471 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:10.743514 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:10 GMT
	I1002 10:58:10.743539 2249882 round_trippers.go:580]     Audit-Id: be74786f-bcae-4290-a603-5d0f3a9d07e4
	I1002 10:58:10.743552 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:10.743559 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:10.743565 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:10.743706 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:10.744094 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:11.237034 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:11.237057 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:11.237066 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:11.237074 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:11.239881 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:11.239904 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:11.239913 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:11.239921 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:11.239927 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:11.239933 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:11.239939 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:11 GMT
	I1002 10:58:11.239945 2249882 round_trippers.go:580]     Audit-Id: e0568670-f66c-4ce8-a381-4af93b1d24e3
	I1002 10:58:11.240038 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:11.240566 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:11.240581 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:11.240589 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:11.240596 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:11.242952 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:11.243040 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:11.243063 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:11.243097 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:11.243124 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:11.243137 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:11.243144 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:11 GMT
	I1002 10:58:11.243150 2249882 round_trippers.go:580]     Audit-Id: 3757f889-1e43-4450-bbaf-706de8aa9ae7
	I1002 10:58:11.243277 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:11.737592 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:11.737616 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:11.737625 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:11.737633 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:11.740397 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:11.740475 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:11.740493 2249882 round_trippers.go:580]     Audit-Id: a412966a-41ef-469e-a606-c9a2e422169d
	I1002 10:58:11.740501 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:11.740507 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:11.740513 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:11.740519 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:11.740529 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:11 GMT
	I1002 10:58:11.740627 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:11.741155 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:11.741171 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:11.741180 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:11.741188 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:11.743371 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:11.743392 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:11.743402 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:11.743409 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:11 GMT
	I1002 10:58:11.743415 2249882 round_trippers.go:580]     Audit-Id: 1cb3a278-0245-432d-9c3a-fafaa6c317d5
	I1002 10:58:11.743422 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:11.743428 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:11.743445 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:11.743673 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:12.237830 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:12.237870 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:12.237881 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:12.237888 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:12.240503 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:12.240524 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:12.240532 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:12.240538 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:12.240545 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:12.240551 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:12 GMT
	I1002 10:58:12.240557 2249882 round_trippers.go:580]     Audit-Id: 7961bfe7-2822-4d2c-82ec-fc048e11af83
	I1002 10:58:12.240567 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:12.240724 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:12.241285 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:12.241302 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:12.241310 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:12.241318 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:12.243635 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:12.243653 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:12.243660 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:12.243667 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:12 GMT
	I1002 10:58:12.243673 2249882 round_trippers.go:580]     Audit-Id: 5d70fbe1-e23e-427c-9485-4f125ff6d535
	I1002 10:58:12.243679 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:12.243685 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:12.243691 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:12.243985 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:12.737613 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:12.737636 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:12.737646 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:12.737653 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:12.740544 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:12.740569 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:12.740579 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:12.740586 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:12.740592 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:12 GMT
	I1002 10:58:12.740598 2249882 round_trippers.go:580]     Audit-Id: e21e0b48-feee-4033-81ff-423e59a72eee
	I1002 10:58:12.740604 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:12.740610 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:12.740793 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:12.741377 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:12.741394 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:12.741403 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:12.741411 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:12.743532 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:12.743590 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:12.743613 2249882 round_trippers.go:580]     Audit-Id: 138d2d6e-d6e3-4a27-a260-68f66631c1bd
	I1002 10:58:12.743637 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:12.743674 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:12.743700 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:12.743722 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:12.743760 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:12 GMT
	I1002 10:58:12.743909 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:12.744285 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:13.237039 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:13.237059 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:13.237070 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:13.237078 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:13.239911 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:13.239976 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:13.239998 2249882 round_trippers.go:580]     Audit-Id: e1a8f57f-d406-404e-9d82-b5cac2e92919
	I1002 10:58:13.240021 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:13.240055 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:13.240084 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:13.240105 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:13.240128 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:13 GMT
	I1002 10:58:13.240287 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:13.240826 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:13.240843 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:13.240851 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:13.240858 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:13.243386 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:13.243452 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:13.243474 2249882 round_trippers.go:580]     Audit-Id: cfa05969-ab15-40bb-8ed4-8aa338f31b54
	I1002 10:58:13.243496 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:13.243534 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:13.243547 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:13.243554 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:13.243560 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:13 GMT
	I1002 10:58:13.243683 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:13.737106 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:13.737173 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:13.737199 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:13.737208 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:13.739784 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:13.739900 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:13.739931 2249882 round_trippers.go:580]     Audit-Id: 51f80281-e6e7-455f-b717-cc5c77be1988
	I1002 10:58:13.739940 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:13.739947 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:13.739954 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:13.739964 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:13.739970 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:13 GMT
	I1002 10:58:13.740066 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:13.740606 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:13.740622 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:13.740630 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:13.740637 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:13.742891 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:13.742909 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:13.742916 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:13.742924 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:13.742930 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:13 GMT
	I1002 10:58:13.742937 2249882 round_trippers.go:580]     Audit-Id: c15fe7b3-5fd9-40c5-9e69-7393d0a2ee62
	I1002 10:58:13.742943 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:13.742950 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:13.743095 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:14.237736 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:14.237761 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:14.237771 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:14.237778 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:14.240247 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:14.240281 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:14.240289 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:14.240295 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:14.240302 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:14 GMT
	I1002 10:58:14.240308 2249882 round_trippers.go:580]     Audit-Id: e4e6a75a-c59d-4c2e-b0cd-e450781bf73f
	I1002 10:58:14.240314 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:14.240320 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:14.240518 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:14.241039 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:14.241057 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:14.241065 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:14.241073 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:14.243206 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:14.243223 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:14.243230 2249882 round_trippers.go:580]     Audit-Id: 5ef814ab-b1ee-4ed9-af32-3928dc2db88c
	I1002 10:58:14.243237 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:14.243243 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:14.243249 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:14.243255 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:14.243261 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:14 GMT
	I1002 10:58:14.243423 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:14.737416 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:14.737441 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:14.737451 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:14.737466 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:14.740075 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:14.740144 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:14.740166 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:14.740185 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:14.740223 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:14.740251 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:14.740274 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:14 GMT
	I1002 10:58:14.740309 2249882 round_trippers.go:580]     Audit-Id: 5c53c153-0388-4aae-b90d-117bc3e2ec9a
	I1002 10:58:14.740421 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:14.740957 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:14.740973 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:14.740982 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:14.740989 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:14.743192 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:14.743213 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:14.743221 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:14 GMT
	I1002 10:58:14.743228 2249882 round_trippers.go:580]     Audit-Id: 30fb4fd7-eb05-40fe-a0b8-d254f91bc4e6
	I1002 10:58:14.743234 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:14.743240 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:14.743246 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:14.743252 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:14.743567 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:15.237175 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:15.237201 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:15.237211 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:15.237218 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:15.239846 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:15.239921 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:15.239930 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:15.239937 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:15.239943 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:15.239949 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:15 GMT
	I1002 10:58:15.239955 2249882 round_trippers.go:580]     Audit-Id: f390e515-8aa1-425b-bb93-7e5c595edf99
	I1002 10:58:15.239961 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:15.240079 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:15.240630 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:15.240645 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:15.240653 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:15.240660 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:15.243047 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:15.243068 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:15.243078 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:15 GMT
	I1002 10:58:15.243084 2249882 round_trippers.go:580]     Audit-Id: 3ef698a0-4296-4b25-978b-112409f48c0c
	I1002 10:58:15.243090 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:15.243096 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:15.243101 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:15.243108 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:15.243243 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:15.243599 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:15.737371 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:15.737397 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:15.737407 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:15.737415 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:15.740356 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:15.740439 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:15.740532 2249882 round_trippers.go:580]     Audit-Id: bbe8ddc9-a415-4272-9f87-2cffa8127242
	I1002 10:58:15.740547 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:15.740555 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:15.740565 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:15.740571 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:15.740578 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:15 GMT
	I1002 10:58:15.740685 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:15.741362 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:15.741380 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:15.741392 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:15.741403 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:15.743985 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:15.744004 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:15.744013 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:15 GMT
	I1002 10:58:15.744019 2249882 round_trippers.go:580]     Audit-Id: 33abc408-fbf8-4ea3-a43a-c15bd8929996
	I1002 10:58:15.744025 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:15.744031 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:15.744038 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:15.744044 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:15.744208 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:16.237906 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:16.237939 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:16.237955 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:16.237966 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:16.240809 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:16.240831 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:16.240840 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:16.240846 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:16.240855 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:16 GMT
	I1002 10:58:16.240862 2249882 round_trippers.go:580]     Audit-Id: 47147a9f-8ca0-4944-8eae-9e63aaff490a
	I1002 10:58:16.240870 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:16.240884 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:16.241171 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:16.241832 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:16.241848 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:16.241858 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:16.241865 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:16.244039 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:16.244056 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:16.244064 2249882 round_trippers.go:580]     Audit-Id: a96203e1-cad1-41c1-a867-9b5e4ef1187f
	I1002 10:58:16.244070 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:16.244076 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:16.244083 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:16.244089 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:16.244095 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:16 GMT
	I1002 10:58:16.244219 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:16.736983 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:16.737008 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:16.737019 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:16.737026 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:16.739505 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:16.739575 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:16.739584 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:16 GMT
	I1002 10:58:16.739593 2249882 round_trippers.go:580]     Audit-Id: 4f4b6cbc-7f6c-493d-940c-987b904f63d9
	I1002 10:58:16.739599 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:16.739605 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:16.739611 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:16.739618 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:16.739715 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:16.740267 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:16.740284 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:16.740293 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:16.740301 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:16.742448 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:16.742466 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:16.742474 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:16.742480 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:16 GMT
	I1002 10:58:16.742486 2249882 round_trippers.go:580]     Audit-Id: 4b4b7d0f-302c-4e92-86c0-0162e8776bfb
	I1002 10:58:16.742492 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:16.742498 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:16.742504 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:16.742647 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:17.236935 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:17.236959 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:17.236969 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:17.236976 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:17.239401 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:17.239426 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:17.239435 2249882 round_trippers.go:580]     Audit-Id: 94a6ac79-5713-4be8-96c1-adef626d2f5c
	I1002 10:58:17.239441 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:17.239448 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:17.239454 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:17.239460 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:17.239467 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:17 GMT
	I1002 10:58:17.239594 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:17.240129 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:17.240146 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:17.240154 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:17.240160 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:17.242319 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:17.242337 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:17.242345 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:17.242351 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:17.242358 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:17.242364 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:17 GMT
	I1002 10:58:17.242370 2249882 round_trippers.go:580]     Audit-Id: ea3e5019-a17e-4774-9030-c3f583df5ec6
	I1002 10:58:17.242376 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:17.242553 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:17.737050 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:17.737074 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:17.737083 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:17.737090 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:17.739647 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:17.739673 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:17.739682 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:17.739689 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:17.739700 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:17.739707 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:17.739714 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:17 GMT
	I1002 10:58:17.739721 2249882 round_trippers.go:580]     Audit-Id: 7bdfb54a-e77a-4bad-b9ea-4061ae23877b
	I1002 10:58:17.739840 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:17.740385 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:17.740399 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:17.740408 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:17.740419 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:17.742685 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:17.742703 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:17.742711 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:17.742717 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:17.742724 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:17.742730 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:17.742736 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:17 GMT
	I1002 10:58:17.742742 2249882 round_trippers.go:580]     Audit-Id: ef6d10d0-4c16-4cf0-a70c-8c91f6dc06ca
	I1002 10:58:17.742864 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:17.743228 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:18.237234 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:18.237319 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:18.237334 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:18.237350 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:18.239730 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:18.239754 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:18.239763 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:18.239770 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:18.239776 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:18.239783 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:18 GMT
	I1002 10:58:18.239789 2249882 round_trippers.go:580]     Audit-Id: 0113e1e1-56db-4a3d-a9b4-a5a3aea4042f
	I1002 10:58:18.239796 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:18.240034 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:18.240591 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:18.240609 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:18.240620 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:18.240629 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:18.242881 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:18.242902 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:18.242909 2249882 round_trippers.go:580]     Audit-Id: 443f1ed4-b952-44a5-8ef4-947d58b9bd1b
	I1002 10:58:18.242917 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:18.242923 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:18.242929 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:18.242935 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:18.242944 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:18 GMT
	I1002 10:58:18.243120 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:18.737041 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:18.737062 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:18.737071 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:18.737078 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:18.739712 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:18.739736 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:18.739744 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:18 GMT
	I1002 10:58:18.739750 2249882 round_trippers.go:580]     Audit-Id: 009dea86-f617-4790-aec8-03ef6a252a5a
	I1002 10:58:18.739756 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:18.739763 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:18.739769 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:18.739781 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:18.739920 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:18.740449 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:18.740467 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:18.740475 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:18.740482 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:18.742702 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:18.742721 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:18.742729 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:18.742735 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:18.742742 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:18 GMT
	I1002 10:58:18.742748 2249882 round_trippers.go:580]     Audit-Id: 568a3f09-7d9b-4cd7-be0f-132a91fb4100
	I1002 10:58:18.742754 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:18.742760 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:18.742890 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:19.237814 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:19.237838 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:19.237848 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:19.237855 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:19.240430 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:19.240460 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:19.240476 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:19.240483 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:19.240490 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:19.240497 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:19.240508 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:19 GMT
	I1002 10:58:19.240518 2249882 round_trippers.go:580]     Audit-Id: d56f239e-3f29-4f4a-880c-4818fe58c493
	I1002 10:58:19.240719 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:19.241283 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:19.241295 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:19.241304 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:19.241310 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:19.243599 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:19.243618 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:19.243626 2249882 round_trippers.go:580]     Audit-Id: eb2c858f-2973-4607-bdf6-fd1cd95c5c69
	I1002 10:58:19.243632 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:19.243639 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:19.243645 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:19.243651 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:19.243657 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:19 GMT
	I1002 10:58:19.243770 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:19.737600 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:19.737621 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:19.737634 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:19.737641 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:19.740116 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:19.740141 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:19.740150 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:19.740157 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:19 GMT
	I1002 10:58:19.740164 2249882 round_trippers.go:580]     Audit-Id: d3c18718-75cb-43ac-84c5-91cdd167bd94
	I1002 10:58:19.740170 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:19.740180 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:19.740186 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:19.740463 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:19.741001 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:19.741018 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:19.741027 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:19.741035 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:19.743289 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:19.743346 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:19.743369 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:19.743392 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:19.743428 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:19.743453 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:19 GMT
	I1002 10:58:19.743475 2249882 round_trippers.go:580]     Audit-Id: f923aed8-b6f4-4903-a0a5-28872916af38
	I1002 10:58:19.743513 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:19.743675 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:19.744083 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
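The `pod_ready.go:102` line above is the summary of each polling cycle: the pod JSON fetched via the GETs is inspected for a `Ready` condition with status `True`. A minimal sketch of that check (illustrative only — the struct and function names here are assumptions, not minikube's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podStatus mirrors only the fields the readiness check needs
// from the Pod response bodies logged above.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isPodReady reports whether the pod JSON carries a Ready=True condition.
func isPodReady(body []byte) bool {
	var p podStatus
	if err := json.Unmarshal(body, &p); err != nil {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	// Matches the log: coredns-5dd5756b68-s5pf5 still reports Ready=False.
	notReady := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	fmt.Println(isPodReady(notReady)) // false
}
```

When the condition never flips to `True`, the caller eventually times out, which is what drives the `RestartKeepsNodes` failure recorded in this report.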
	I1002 10:58:20.237113 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:20.237137 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:20.237147 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:20.237155 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:20.239916 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:20.239986 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:20.240008 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:20.240030 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:20 GMT
	I1002 10:58:20.240062 2249882 round_trippers.go:580]     Audit-Id: e8104e7c-c03f-4caf-96a1-c58c6d6d8e56
	I1002 10:58:20.240072 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:20.240078 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:20.240085 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:20.240208 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:20.240741 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:20.240759 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:20.240767 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:20.240774 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:20.243043 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:20.243067 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:20.243075 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:20.243082 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:20.243088 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:20.243094 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:20 GMT
	I1002 10:58:20.243101 2249882 round_trippers.go:580]     Audit-Id: bd3072fe-e7e3-4482-aa04-98c8b6c488c3
	I1002 10:58:20.243107 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:20.243323 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:20.737028 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:20.737053 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:20.737062 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:20.737069 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:20.739547 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:20.739569 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:20.739577 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:20.739584 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:20.739590 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:20.739596 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:20 GMT
	I1002 10:58:20.739606 2249882 round_trippers.go:580]     Audit-Id: a4b9a133-ce48-4244-ba4a-041549bf288d
	I1002 10:58:20.739613 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:20.739927 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:20.740495 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:20.740509 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:20.740517 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:20.740524 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:20.742656 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:20.742674 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:20.742682 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:20.742688 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:20.742695 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:20 GMT
	I1002 10:58:20.742702 2249882 round_trippers.go:580]     Audit-Id: 5cf11ef2-8888-4a79-9f3a-3a80f61c46cd
	I1002 10:58:20.742711 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:20.742717 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:20.742908 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:21.237695 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:21.237717 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:21.237727 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:21.237734 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:21.240260 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:21.240281 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:21.240290 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:21.240296 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:21.240304 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:21.240310 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:21.240316 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:21 GMT
	I1002 10:58:21.240323 2249882 round_trippers.go:580]     Audit-Id: d4db00bc-8449-414a-8343-c7136fdf75ef
	I1002 10:58:21.240454 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:21.240985 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:21.240995 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:21.241004 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:21.241010 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:21.243166 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:21.243185 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:21.243193 2249882 round_trippers.go:580]     Audit-Id: 1de028d2-3986-4493-aab0-4c99dfae4c91
	I1002 10:58:21.243199 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:21.243205 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:21.243211 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:21.243217 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:21.243225 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:21 GMT
	I1002 10:58:21.243369 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:21.737525 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:21.737549 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:21.737559 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:21.737566 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:21.740250 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:21.740319 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:21.740358 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:21.740373 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:21 GMT
	I1002 10:58:21.740380 2249882 round_trippers.go:580]     Audit-Id: c7a11823-3a1f-43dd-9a98-af8ef8f3df1f
	I1002 10:58:21.740387 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:21.740393 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:21.740401 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:21.740517 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:21.741054 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:21.741070 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:21.741078 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:21.741086 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:21.743344 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:21.743368 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:21.743377 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:21.743383 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:21.743389 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:21.743405 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:21.743414 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:21 GMT
	I1002 10:58:21.743420 2249882 round_trippers.go:580]     Audit-Id: 741873c5-325f-418d-a6dd-39196e08d315
	I1002 10:58:21.743541 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:22.237790 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:22.237815 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:22.237828 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:22.237836 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:22.241175 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:22.241197 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:22.241207 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:22.241214 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:22.241220 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:22 GMT
	I1002 10:58:22.241226 2249882 round_trippers.go:580]     Audit-Id: c718ba5e-197e-4887-8064-3fb27c840671
	I1002 10:58:22.241232 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:22.241238 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:22.241435 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:22.241997 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:22.242015 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:22.242024 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:22.242033 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:22.244338 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:22.244358 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:22.244367 2249882 round_trippers.go:580]     Audit-Id: 8d8b51c2-368f-4ba7-992f-0eec0fa72476
	I1002 10:58:22.244373 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:22.244380 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:22.244386 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:22.244393 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:22.244399 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:22 GMT
	I1002 10:58:22.244519 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:22.244899 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:22.737780 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:22.737811 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:22.737826 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:22.737839 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:22.740520 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:22.740546 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:22.740554 2249882 round_trippers.go:580]     Audit-Id: f98abc6f-169b-4456-858c-74059c452b89
	I1002 10:58:22.740564 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:22.740671 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:22.740699 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:22.740706 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:22.740729 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:22 GMT
	I1002 10:58:22.740882 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:22.741581 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:22.741598 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:22.741607 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:22.741614 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:22.743881 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:22.743906 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:22.743914 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:22.743920 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:22.743927 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:22.743934 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:22.743940 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:22 GMT
	I1002 10:58:22.743951 2249882 round_trippers.go:580]     Audit-Id: 0e370485-a24b-454a-b2b9-079bf8420451
	I1002 10:58:22.744145 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:23.237922 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:23.237963 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:23.237972 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:23.237979 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:23.240492 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:23.240511 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:23.240518 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:23 GMT
	I1002 10:58:23.240525 2249882 round_trippers.go:580]     Audit-Id: 38d88d34-3f6e-4ac5-a47c-f1ddde346844
	I1002 10:58:23.240531 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:23.240536 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:23.240543 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:23.240552 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:23.240686 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:23.241284 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:23.241297 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:23.241305 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:23.241312 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:23.243483 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:23.243499 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:23.243545 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:23 GMT
	I1002 10:58:23.243562 2249882 round_trippers.go:580]     Audit-Id: 90e79152-6a7d-45b9-b7bd-11441565bd82
	I1002 10:58:23.243568 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:23.243575 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:23.243581 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:23.243615 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:23.243752 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:23.737059 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:23.737085 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:23.737095 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:23.737102 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:23.740012 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:23.740037 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:23.740046 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:23 GMT
	I1002 10:58:23.740053 2249882 round_trippers.go:580]     Audit-Id: 68a42a4f-29be-47d2-b854-541ca1499db7
	I1002 10:58:23.740059 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:23.740091 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:23.740097 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:23.740103 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:23.740286 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:23.741368 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:23.741382 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:23.741400 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:23.741408 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:23.747710 2249882 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1002 10:58:23.747738 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:23.747746 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:23.747753 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:23 GMT
	I1002 10:58:23.747759 2249882 round_trippers.go:580]     Audit-Id: 64653b45-596c-4cb6-be33-af73746aad86
	I1002 10:58:23.747765 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:23.747772 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:23.747781 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:23.747897 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:24.237459 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:24.237485 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:24.237495 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:24.237502 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:24.240253 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:24.240288 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:24.240297 2249882 round_trippers.go:580]     Audit-Id: c355c40c-1d18-49c0-9b4d-2e78ca3e39e5
	I1002 10:58:24.240304 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:24.240310 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:24.240317 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:24.240323 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:24.240329 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:24 GMT
	I1002 10:58:24.240515 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:24.241060 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:24.241078 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:24.241087 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:24.241095 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:24.243428 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:24.243455 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:24.243463 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:24.243469 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:24.243475 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:24.243484 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:24.243493 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:24 GMT
	I1002 10:58:24.243500 2249882 round_trippers.go:580]     Audit-Id: 2b42de9c-3541-4345-a457-0f080f60de97
	I1002 10:58:24.243653 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:24.737422 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:24.737447 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:24.737458 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:24.737465 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:24.740192 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:24.740273 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:24.740291 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:24.740301 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:24 GMT
	I1002 10:58:24.740307 2249882 round_trippers.go:580]     Audit-Id: 15b161c3-bf37-4135-96bf-e9bddea1aafd
	I1002 10:58:24.740314 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:24.740336 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:24.740350 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:24.740550 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:24.741118 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:24.741136 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:24.741145 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:24.741152 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:24.743465 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:24.743519 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:24.743538 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:24.743546 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:24.743552 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:24 GMT
	I1002 10:58:24.743558 2249882 round_trippers.go:580]     Audit-Id: f408aef3-fbd9-4aeb-8e31-d16676b1c186
	I1002 10:58:24.743564 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:24.743570 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:24.743748 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:24.744187 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
	I1002 10:58:25.237854 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:25.237878 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:25.237888 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:25.237899 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:25.240525 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:25.240600 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:25.240624 2249882 round_trippers.go:580]     Audit-Id: 0704ed7a-2119-4c4b-a4c7-2769eba29398
	I1002 10:58:25.240637 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:25.240659 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:25.240674 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:25.240681 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:25.240690 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:25 GMT
	I1002 10:58:25.240877 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:25.241436 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:25.241453 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:25.241462 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:25.241470 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:25.243654 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:25.243670 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:25.243677 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:25.243684 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:25.243690 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:25.243696 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:25.243703 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:25 GMT
	I1002 10:58:25.243708 2249882 round_trippers.go:580]     Audit-Id: 0483effd-0be5-49d8-80b6-cf238b32fe6c
	I1002 10:58:25.243807 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:25.737394 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:25.737416 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:25.737426 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:25.737433 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:25.740157 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:25.740183 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:25.740194 2249882 round_trippers.go:580]     Audit-Id: abe8146b-0a00-4b71-b3bd-0837de335c06
	I1002 10:58:25.740201 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:25.740208 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:25.740214 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:25.740220 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:25.740227 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:25 GMT
	I1002 10:58:25.740516 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:25.741061 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:25.741078 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:25.741088 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:25.741095 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:25.743546 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:25.743606 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:25.743628 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:25.743651 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:25 GMT
	I1002 10:58:25.743710 2249882 round_trippers.go:580]     Audit-Id: 3128dbda-f7c2-408b-8547-d1f6e25fe687
	I1002 10:58:25.743735 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:25.743754 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:25.743776 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:25.743916 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:26.237459 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:26.237487 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:26.237505 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:26.237512 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:26.243193 2249882 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 10:58:26.243216 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:26.243224 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:26 GMT
	I1002 10:58:26.243241 2249882 round_trippers.go:580]     Audit-Id: 13a61fbf-181a-4453-b22e-c29abee89c98
	I1002 10:58:26.243251 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:26.243263 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:26.243270 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:26.243279 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:26.243511 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:26.244264 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:26.244279 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:26.244288 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:26.244297 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:26.247074 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:26.247096 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:26.247110 2249882 round_trippers.go:580]     Audit-Id: bac60b7c-845d-4e73-bd4d-d7dbddac34a7
	I1002 10:58:26.247120 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:26.247126 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:26.247136 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:26.247143 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:26.247157 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:26 GMT
	I1002 10:58:26.247596 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:26.737176 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:26.737202 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:26.737212 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:26.737227 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:26.740054 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:26.740078 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:26.740086 2249882 round_trippers.go:580]     Audit-Id: ed46171d-5f41-437d-8732-69c521f932b6
	I1002 10:58:26.740093 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:26.740099 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:26.740105 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:26.740112 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:26.740125 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:26 GMT
	I1002 10:58:26.740285 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:26.740805 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:26.740821 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:26.740829 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:26.740837 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:26.743103 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:26.743134 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:26.743144 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:26.743151 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:26.743157 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:26.743164 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:26 GMT
	I1002 10:58:26.743173 2249882 round_trippers.go:580]     Audit-Id: 00e0cb97-efbb-4dcb-b602-1f5a28977ce5
	I1002 10:58:26.743184 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:26.743319 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:27.237728 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:27.237757 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:27.237767 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:27.237774 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:27.240277 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:27.240304 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:27.240313 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:27.240319 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:27.240327 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:27 GMT
	I1002 10:58:27.240333 2249882 round_trippers.go:580]     Audit-Id: 0812de73-ef1b-4915-a5aa-f688e1c6532c
	I1002 10:58:27.240339 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:27.240348 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:27.240462 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:27.240999 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:27.241015 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:27.241025 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:27.241036 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:27.243159 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:27.243183 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:27.243192 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:27.243203 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:27.243211 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:27 GMT
	I1002 10:58:27.243219 2249882 round_trippers.go:580]     Audit-Id: e7c43696-232f-4957-b68f-144e6174e700
	I1002 10:58:27.243228 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:27.243235 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:27.243497 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:27.243909 2249882 pod_ready.go:102] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"False"
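The polling cycle above (GET the pod, GET its node, test the pod's Ready condition, sleep, repeat) is what drives the `pod_ready.go:92/102` log lines. Conceptually, the Ready check reduces to scanning the pod's status conditions for `Ready == "True"`. The sketch below uses simplified stand-in structs, not the real `corev1` types or minikube's actual code:

```go
package main

import "fmt"

// PodCondition is a simplified stand-in for corev1.PodCondition.
type PodCondition struct {
	Type   string // e.g. "Ready", "PodScheduled"
	Status string // "True", "False", or "Unknown"
}

// Pod is a simplified stand-in for corev1.Pod with only status conditions.
type Pod struct {
	Name       string
	Conditions []PodCondition
}

// isPodReady reports whether the pod carries a Ready condition with
// status "True" -- the predicate behind the log's "Ready":"False"/"True" lines.
func isPodReady(p Pod) bool {
	for _, c := range p.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	notReady := Pod{
		Name:       "coredns-5dd5756b68-s5pf5",
		Conditions: []PodCondition{{Type: "Ready", Status: "False"}},
	}
	ready := Pod{
		Name:       "etcd-multinode-899833",
		Conditions: []PodCondition{{Type: "Ready", Status: "True"}},
	}
	fmt.Println(isPodReady(notReady)) // false
	fmt.Println(isPodReady(ready))    // true
}
```

A pod with no Ready condition at all (e.g. still Pending) is treated as not ready, which matches why the loop keeps polling until the condition appears with status "True".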
	I1002 10:58:27.737055 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:27.737076 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:27.737086 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:27.737093 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:27.739904 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:27.739980 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:27.740003 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:27 GMT
	I1002 10:58:27.740025 2249882 round_trippers.go:580]     Audit-Id: 8aaf0ea5-bbab-47a8-9d80-2d106b57ff76
	I1002 10:58:27.740067 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:27.740090 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:27.740111 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:27.740146 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:27.740308 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"711","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6382 chars]
	I1002 10:58:27.740946 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:27.740966 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:27.740975 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:27.740984 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:27.743443 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:27.743466 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:27.743475 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:27.743481 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:27 GMT
	I1002 10:58:27.743488 2249882 round_trippers.go:580]     Audit-Id: bcb4bd71-51bc-4c62-a171-444a1729fa7f
	I1002 10:58:27.743494 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:27.743500 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:27.743510 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:27.743636 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.237567 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:28.237592 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.237602 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.237609 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.240487 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.240508 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.240529 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.240536 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.240557 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.240569 2249882 round_trippers.go:580]     Audit-Id: 95069a40-fbfe-4e9f-bdcb-8edb5e4a1173
	I1002 10:58:28.240576 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.240587 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.241056 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1002 10:58:28.241608 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.241626 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.241635 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.241642 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.244124 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.244147 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.244155 2249882 round_trippers.go:580]     Audit-Id: 3f960e4f-ea87-4d90-80ad-f8df7bd8ca68
	I1002 10:58:28.244162 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.244168 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.244174 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.244180 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.244186 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.244446 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.244830 2249882 pod_ready.go:92] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.244850 2249882 pod_ready.go:81] duration metric: took 32.023712844s waiting for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.244861 2249882 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.244918 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-899833
	I1002 10:58:28.244928 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.244936 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.244943 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.247268 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.247299 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.247307 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.247318 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.247327 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.247334 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.247344 2249882 round_trippers.go:580]     Audit-Id: 8e1df7ce-bbf6-433a-b698-ff3400f11347
	I1002 10:58:28.247350 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.247484 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899833","namespace":"kube-system","uid":"50fafe88-1106-4021-9c0c-7bb9d9d17ffb","resourceVersion":"780","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.mirror":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.seen":"2023-10-02T10:54:43.504344255Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I1002 10:58:28.247947 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.247970 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.247978 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.247985 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.250100 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.250123 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.250131 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.250138 2249882 round_trippers.go:580]     Audit-Id: eadcd4d8-b5a0-4d3b-a9c4-553512d15155
	I1002 10:58:28.250144 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.250151 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.250164 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.250173 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.250376 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.250729 2249882 pod_ready.go:92] pod "etcd-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.250743 2249882 pod_ready.go:81] duration metric: took 5.875193ms waiting for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.250765 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.250824 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899833
	I1002 10:58:28.250832 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.250840 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.250847 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.256794 2249882 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 10:58:28.256816 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.256825 2249882 round_trippers.go:580]     Audit-Id: 3d821cb9-2a36-4b80-9d75-8d0c3983777f
	I1002 10:58:28.256832 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.256838 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.256847 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.256856 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.256874 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.257524 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899833","namespace":"kube-system","uid":"fb05b79f-58ee-4097-aa20-b9721f21d29c","resourceVersion":"785","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.mirror":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.seen":"2023-10-02T10:54:43.504350548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8445 chars]
	I1002 10:58:28.258150 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.258168 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.258177 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.258188 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.262752 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:28.262778 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.262788 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.262802 2249882 round_trippers.go:580]     Audit-Id: 57f0bec4-3d3a-4c46-932d-d7ddc317eee8
	I1002 10:58:28.262817 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.262824 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.262837 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.262846 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.263254 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.263693 2249882 pod_ready.go:92] pod "kube-apiserver-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.263710 2249882 pod_ready.go:81] duration metric: took 12.932164ms waiting for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.263722 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.263790 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899833
	I1002 10:58:28.263806 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.263815 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.263823 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.266053 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.266115 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.266156 2249882 round_trippers.go:580]     Audit-Id: 15bab521-385b-4d25-a95f-a54f951ca4f1
	I1002 10:58:28.266167 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.266175 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.266181 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.266187 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.266194 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.266579 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899833","namespace":"kube-system","uid":"92b1c97d-b38b-405b-9e51-272591b87dcf","resourceVersion":"798","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.mirror":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.seen":"2023-10-02T10:54:43.504351845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8018 chars]
	I1002 10:58:28.267106 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.267122 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.267131 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.267139 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.269746 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.269767 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.269776 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.269782 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.269788 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.269795 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.269804 2249882 round_trippers.go:580]     Audit-Id: ec65ca76-d4ea-4121-8121-a02debbd92b4
	I1002 10:58:28.269810 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.270769 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.271147 2249882 pod_ready.go:92] pod "kube-controller-manager-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.271165 2249882 pod_ready.go:81] duration metric: took 7.435784ms waiting for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.271180 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.271240 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:28.271251 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.271259 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.271272 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.273551 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.273572 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.273580 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.273587 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.273593 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.273600 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.273606 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.273615 2249882 round_trippers.go:580]     Audit-Id: 3939e445-8ced-4a73-941e-bf6123a1fe9b
	I1002 10:58:28.274091 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"473","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I1002 10:58:28.274551 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:28.274567 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.274577 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.274583 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.276861 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.276879 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.276888 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.276894 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.276900 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.276906 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.276915 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.276921 2249882 round_trippers.go:580]     Audit-Id: aa3fc13f-d81c-40ab-ae1f-389edce9bb9d
	I1002 10:58:28.277440 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"fae5cedd-05b9-4641-a9c0-540d8cb0740c","resourceVersion":"540","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4461 chars]
	I1002 10:58:28.277774 2249882 pod_ready.go:92] pod "kube-proxy-76wth" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.277791 2249882 pod_ready.go:81] duration metric: took 6.604846ms waiting for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.277803 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.438152 2249882 request.go:629] Waited for 160.280776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:28.438231 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:28.438245 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.438254 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.438262 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.440810 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.440830 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.440838 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.440844 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.440850 2249882 round_trippers.go:580]     Audit-Id: d1508304-8169-4442-9182-189cb92c322c
	I1002 10:58:28.440856 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.440867 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.440873 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.440990 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fjcp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"2d159cb7-69ca-4b3c-b918-b698bb157220","resourceVersion":"712","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1002 10:58:28.637767 2249882 request.go:629] Waited for 196.258396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.637849 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:28.637860 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.637869 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.637876 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.640486 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.640511 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.640520 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.640526 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.640533 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.640539 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.640545 2249882 round_trippers.go:580]     Audit-Id: f4f451e7-4070-4038-a469-4f4599fc41bf
	I1002 10:58:28.640564 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.640674 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:28.641079 2249882 pod_ready.go:92] pod "kube-proxy-fjcp8" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:28.641095 2249882 pod_ready.go:81] duration metric: took 363.279189ms waiting for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.641106 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:28.838456 2249882 request.go:629] Waited for 197.267719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:28.838515 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:28.838520 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:28.838535 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:28.838543 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:28.841182 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:28.841219 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:28.841230 2249882 round_trippers.go:580]     Audit-Id: 254f8faa-7eb5-4400-90bd-f1405d24be44
	I1002 10:58:28.841236 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:28.841242 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:28.841248 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:28.841282 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:28.841289 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:28 GMT
	I1002 10:58:28.841411 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xnhqd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd","resourceVersion":"688","creationTimestamp":"2023-10-02T10:56:32Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1002 10:58:29.038276 2249882 request.go:629] Waited for 196.337017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:29.038393 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:29.038406 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.038416 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.038424 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.041039 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.041116 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.041139 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.041203 2249882 round_trippers.go:580]     Audit-Id: 2ffe88df-baeb-40bd-b4bd-4756157aa1cc
	I1002 10:58:29.041228 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.041274 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.041296 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.041308 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.041406 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m03","uid":"332112e7-39bc-44d1-86bd-88e1074e5d8d","resourceVersion":"670","creationTimestamp":"2023-10-02T10:56:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4075 chars]
	I1002 10:58:29.041778 2249882 pod_ready.go:92] pod "kube-proxy-xnhqd" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:29.041795 2249882 pod_ready.go:81] duration metric: took 400.683057ms waiting for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:29.041806 2249882 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:29.238242 2249882 request.go:629] Waited for 196.362058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:29.238347 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:29.238362 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.238372 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.238383 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.241126 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.241203 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.241221 2249882 round_trippers.go:580]     Audit-Id: f542fb8e-c0ab-4861-abbf-e494b31795af
	I1002 10:58:29.241233 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.241240 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.241271 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.241303 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.241314 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.241448 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899833","namespace":"kube-system","uid":"65999631-952f-42f1-ae73-f32996dc19fb","resourceVersion":"797","creationTimestamp":"2023-10-02T10:54:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.mirror":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.seen":"2023-10-02T10:54:35.990546729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I1002 10:58:29.438293 2249882 request.go:629] Waited for 196.332389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:29.438378 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:29.438390 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.438399 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.438406 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.440885 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.440909 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.440928 2249882 round_trippers.go:580]     Audit-Id: b8d8b6fa-9dec-49f5-88be-12bbe6151e94
	I1002 10:58:29.440935 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.440941 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.440947 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.440958 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.440968 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.441084 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:29.441504 2249882 pod_ready.go:92] pod "kube-scheduler-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:29.441522 2249882 pod_ready.go:81] duration metric: took 399.708925ms waiting for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:29.441539 2249882 pod_ready.go:38] duration metric: took 33.232289405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:29.441560 2249882 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 10:58:29.451298 2249882 command_runner.go:130] > -16
	I1002 10:58:29.451368 2249882 ops.go:34] apiserver oom_adj: -16
	I1002 10:58:29.451379 2249882 kubeadm.go:640] restartCluster took 54.223726706s
	I1002 10:58:29.451389 2249882 kubeadm.go:406] StartCluster complete in 54.256070409s
	I1002 10:58:29.451405 2249882 settings.go:142] acquiring lock: {Name:mk7b49767935c15b5f90083e95558323a1cf0ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:58:29.451479 2249882 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:29.452136 2249882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-2134307/kubeconfig: {Name:mk62f5c672074becc8cade8f73c1bedcd1d2907c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:58:29.452351 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 10:58:29.452618 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:29.452655 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:29.452765 2249882 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 10:58:29.457001 2249882 out.go:177] * Enabled addons: 
	I1002 10:58:29.452887 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:58:29.458728 2249882 addons.go:502] enable addons completed in 5.954093ms: enabled=[]
	I1002 10:58:29.459057 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 10:58:29.459067 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.459076 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.459083 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.461928 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.461946 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.461955 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.461961 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.461972 2249882 round_trippers.go:580]     Content-Length: 291
	I1002 10:58:29.461980 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.461987 2249882 round_trippers.go:580]     Audit-Id: 8228a7b4-3c8b-4231-8f38-e79ce7f7a709
	I1002 10:58:29.461992 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.461998 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.462243 2249882 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b08b27fb-9d04-4b90-bfa5-b624291dfc83","resourceVersion":"813","creationTimestamp":"2023-10-02T10:54:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 10:58:29.462415 2249882 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-899833" context rescaled to 1 replicas
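The GET above reads the coredns Deployment's `autoscaling/v1` Scale subresource, and `kapi.go` then decides whether to rescale it to 1 replica. A minimal sketch of that decision over the exact response shape logged above (the helper name `needs_rescale` is hypothetical, not minikube's actual code):

```python
import json

def needs_rescale(scale_response: str, desired: int) -> bool:
    """Parse an autoscaling/v1 Scale body and report whether
    spec.replicas differs from the desired replica count."""
    scale = json.loads(scale_response)
    assert scale["kind"] == "Scale"  # sanity-check the subresource kind
    return scale["spec"]["replicas"] != desired

# Body shape copied (trimmed) from the 200 response logged above.
body = json.dumps({
    "kind": "Scale",
    "apiVersion": "autoscaling/v1",
    "metadata": {"name": "coredns", "namespace": "kube-system"},
    "spec": {"replicas": 1},
    "status": {"replicas": 1, "selector": "k8s-app=kube-dns"},
})
print(needs_rescale(body, 1))  # already at 1 replica -> False
```

Because `spec.replicas` is already 1, no PUT back to the Scale subresource is needed, which is why the log proceeds straight to "rescaled to 1 replicas".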
	I1002 10:58:29.462447 2249882 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 10:58:29.464385 2249882 out.go:177] * Verifying Kubernetes components...
	I1002 10:58:29.466141 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:58:29.568672 2249882 command_runner.go:130] > apiVersion: v1
	I1002 10:58:29.568694 2249882 command_runner.go:130] > data:
	I1002 10:58:29.568700 2249882 command_runner.go:130] >   Corefile: |
	I1002 10:58:29.568705 2249882 command_runner.go:130] >     .:53 {
	I1002 10:58:29.568710 2249882 command_runner.go:130] >         log
	I1002 10:58:29.568716 2249882 command_runner.go:130] >         errors
	I1002 10:58:29.568721 2249882 command_runner.go:130] >         health {
	I1002 10:58:29.568726 2249882 command_runner.go:130] >            lameduck 5s
	I1002 10:58:29.568731 2249882 command_runner.go:130] >         }
	I1002 10:58:29.568737 2249882 command_runner.go:130] >         ready
	I1002 10:58:29.568749 2249882 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1002 10:58:29.568754 2249882 command_runner.go:130] >            pods insecure
	I1002 10:58:29.568764 2249882 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1002 10:58:29.568769 2249882 command_runner.go:130] >            ttl 30
	I1002 10:58:29.568776 2249882 command_runner.go:130] >         }
	I1002 10:58:29.568782 2249882 command_runner.go:130] >         prometheus :9153
	I1002 10:58:29.568790 2249882 command_runner.go:130] >         hosts {
	I1002 10:58:29.568796 2249882 command_runner.go:130] >            192.168.58.1 host.minikube.internal
	I1002 10:58:29.568801 2249882 command_runner.go:130] >            fallthrough
	I1002 10:58:29.568812 2249882 command_runner.go:130] >         }
	I1002 10:58:29.568822 2249882 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1002 10:58:29.568827 2249882 command_runner.go:130] >            max_concurrent 1000
	I1002 10:58:29.568832 2249882 command_runner.go:130] >         }
	I1002 10:58:29.568837 2249882 command_runner.go:130] >         cache 30
	I1002 10:58:29.568846 2249882 command_runner.go:130] >         loop
	I1002 10:58:29.568851 2249882 command_runner.go:130] >         reload
	I1002 10:58:29.568857 2249882 command_runner.go:130] >         loadbalance
	I1002 10:58:29.568864 2249882 command_runner.go:130] >     }
	I1002 10:58:29.568869 2249882 command_runner.go:130] > kind: ConfigMap
	I1002 10:58:29.568876 2249882 command_runner.go:130] > metadata:
	I1002 10:58:29.568882 2249882 command_runner.go:130] >   creationTimestamp: "2023-10-02T10:54:43Z"
	I1002 10:58:29.568887 2249882 command_runner.go:130] >   name: coredns
	I1002 10:58:29.568892 2249882 command_runner.go:130] >   namespace: kube-system
	I1002 10:58:29.568900 2249882 command_runner.go:130] >   resourceVersion: "370"
	I1002 10:58:29.568906 2249882 command_runner.go:130] >   uid: fc76aacb-6ec1-4746-ae20-712369e5fc29
	I1002 10:58:29.568941 2249882 node_ready.go:35] waiting up to 6m0s for node "multinode-899833" to be "Ready" ...
	I1002 10:58:29.569069 2249882 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
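`start.go:896` skips patching the ConfigMap because the Corefile dumped above already carries a `hosts`-plugin entry for `host.minikube.internal` (pointing at the gateway, 192.168.58.1). A hedged sketch of that containment check; the helper name is an assumption, not minikube's actual function:

```python
# Trimmed copy of the hosts block from the Corefile logged above.
COREFILE = """\
.:53 {
    hosts {
       192.168.58.1 host.minikube.internal
       fallthrough
    }
}
"""

def has_host_record(corefile: str, hostname: str = "host.minikube.internal") -> bool:
    """True if any hosts-plugin line maps an IP to `hostname`."""
    for line in corefile.splitlines():
        parts = line.split()
        # A hosts entry is exactly "<ip> <name>"; directives differ in arity.
        if len(parts) == 2 and parts[1] == hostname:
            return True
    return False

print(has_host_record(COREFILE))  # -> True, so minikube skips the rewrite
```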
	I1002 10:58:29.638205 2249882 request.go:629] Waited for 69.1805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:29.638264 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:29.638274 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.638286 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.638295 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.640716 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:29.640742 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.640752 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.640758 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.640765 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.640772 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.640778 2249882 round_trippers.go:580]     Audit-Id: 745e361b-1230-4e00-9087-1448ad59a473
	I1002 10:58:29.640785 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.640886 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:29.641297 2249882 node_ready.go:49] node "multinode-899833" has status "Ready":"True"
	I1002 10:58:29.641316 2249882 node_ready.go:38] duration metric: took 72.359043ms waiting for node "multinode-899833" to be "Ready" ...
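The recurring "Waited for … due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter (the QPS/Burst fields visible in the `rest.Config` dump earlier): once the burst is spent, each request sleeps until the next token refills. The ~196 ms waits below are consistent with a QPS of 5 (one token every 200 ms), though the exact configured values are not shown in this log. A simplified, hypothetical model of that wait calculation:

```python
class TokenBucket:
    """Toy model of a QPS/Burst limiter: reserve() returns how long
    a request must wait before it may proceed (virtual clock, seconds)."""
    def __init__(self, qps: float, burst: int):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)
        self.last = 0.0

    def reserve(self, now: float) -> float:
        # Refill for elapsed time (capped at burst), then spend one token;
        # a negative balance means the caller must wait for the refill.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        self.tokens -= 1.0
        return max(0.0, -self.tokens / self.qps)

tb = TokenBucket(qps=5, burst=1)
print(tb.reserve(0.0))  # first request passes immediately -> 0.0
print(tb.reserve(0.0))  # second must wait one token interval -> 0.2
```

Back-to-back GETs against the node and pod endpoints, as in the polling loop below, therefore space themselves roughly 200 ms apart, which matches the logged waits.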
	I1002 10:58:29.641330 2249882 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:29.837639 2249882 request.go:629] Waited for 196.230769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:29.837706 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:29.837717 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:29.837728 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:29.837738 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:29.841900 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:29.841988 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:29.842006 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:29.842014 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:29.842021 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:29 GMT
	I1002 10:58:29.842027 2249882 round_trippers.go:580]     Audit-Id: c06c650d-bc3c-4e3a-8731-e6a7b19eddf7
	I1002 10:58:29.842052 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:29.842065 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:29.842634 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84297 chars]
	I1002 10:58:29.846700 2249882 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.038197 2249882 request.go:629] Waited for 191.398566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:30.038299 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:30.038313 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.038323 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.038335 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.041562 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:30.041606 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.041616 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.041624 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.041631 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.041638 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.041648 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.041655 2249882 round_trippers.go:580]     Audit-Id: dff88e0d-85eb-4e63-b7ef-f6a45d733c6d
	I1002 10:58:30.042234 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1002 10:58:30.238297 2249882 request.go:629] Waited for 195.328678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:30.238358 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:30.238369 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.238378 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.238389 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.242003 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:30.242129 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.242171 2249882 round_trippers.go:580]     Audit-Id: 617575c8-8338-4ce6-ab0d-6a7d40df04bf
	I1002 10:58:30.242194 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.242221 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.242255 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.242283 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.242308 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.242471 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:30.242902 2249882 pod_ready.go:92] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:30.242943 2249882 pod_ready.go:81] duration metric: took 396.209374ms waiting for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
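`pod_ready.go:92` derives the `"Ready":"True"` verdict from the Pod's `status.conditions` in the response body above: a pod counts as Ready when the condition of type `Ready` has status `"True"`. A minimal sketch of that check over the standard Pod JSON shape (the helper name is hypothetical):

```python
def pod_is_ready(pod: dict) -> bool:
    """True if the Pod carries condition type=Ready with status "True"."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

pod = {
    "metadata": {"name": "coredns-5dd5756b68-s5pf5"},
    "status": {"conditions": [
        {"type": "Initialized", "status": "True"},
        {"type": "Ready", "status": "True"},
    ]},
}
print(pod_is_ready(pod))  # -> True
```

The same test is repeated for each system-critical pod below (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), one throttled GET per pod.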
	I1002 10:58:30.242970 2249882 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.438405 2249882 request.go:629] Waited for 195.342191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-899833
	I1002 10:58:30.438523 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-899833
	I1002 10:58:30.438536 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.438546 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.438553 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.441125 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:30.441184 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.441206 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.441228 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.441281 2249882 round_trippers.go:580]     Audit-Id: 44415cd9-4ead-441f-99ac-3073cbec494f
	I1002 10:58:30.441306 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.441327 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.441347 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.441486 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899833","namespace":"kube-system","uid":"50fafe88-1106-4021-9c0c-7bb9d9d17ffb","resourceVersion":"780","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.mirror":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.seen":"2023-10-02T10:54:43.504344255Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I1002 10:58:30.638152 2249882 request.go:629] Waited for 196.164349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:30.638272 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:30.638285 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.638296 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.638304 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.640944 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:30.641008 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.641023 2249882 round_trippers.go:580]     Audit-Id: 22991863-4229-470e-a5cd-caf23ee26076
	I1002 10:58:30.641030 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.641037 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.641043 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.641050 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.641080 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.641225 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:30.641634 2249882 pod_ready.go:92] pod "etcd-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:30.641652 2249882 pod_ready.go:81] duration metric: took 398.663408ms waiting for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.641673 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:30.838052 2249882 request.go:629] Waited for 196.306921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899833
	I1002 10:58:30.838112 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899833
	I1002 10:58:30.838121 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:30.838131 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:30.838142 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:30.840708 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:30.840775 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:30.840813 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:30.840846 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:30.840867 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:30.840895 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:30 GMT
	I1002 10:58:30.840904 2249882 round_trippers.go:580]     Audit-Id: 30047b8f-b47f-4213-982e-0a55e403a1b8
	I1002 10:58:30.840910 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:30.841057 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899833","namespace":"kube-system","uid":"fb05b79f-58ee-4097-aa20-b9721f21d29c","resourceVersion":"785","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.mirror":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.seen":"2023-10-02T10:54:43.504350548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8445 chars]
	I1002 10:58:31.037983 2249882 request.go:629] Waited for 196.342506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:31.038050 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:31.038059 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.038068 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.038076 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.041027 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.041054 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.041063 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.041076 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.041084 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.041090 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.041096 2249882 round_trippers.go:580]     Audit-Id: 1da090fa-b5fb-4c51-a46a-2160d673ff1a
	I1002 10:58:31.041163 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.041284 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:31.041680 2249882 pod_ready.go:92] pod "kube-apiserver-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:31.041697 2249882 pod_ready.go:81] duration metric: took 400.01371ms waiting for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.041710 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.238090 2249882 request.go:629] Waited for 196.313361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899833
	I1002 10:58:31.238151 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899833
	I1002 10:58:31.238160 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.238169 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.238176 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.241056 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.241082 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.241091 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.241105 2249882 round_trippers.go:580]     Audit-Id: 896448cd-7fc4-4ae3-a0c5-a22018374a28
	I1002 10:58:31.241112 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.241119 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.241125 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.241136 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.241299 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899833","namespace":"kube-system","uid":"92b1c97d-b38b-405b-9e51-272591b87dcf","resourceVersion":"798","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.mirror":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.seen":"2023-10-02T10:54:43.504351845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8018 chars]
	I1002 10:58:31.438209 2249882 request.go:629] Waited for 196.357923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:31.438294 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:31.438304 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.438314 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.438321 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.440879 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.440906 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.440914 2249882 round_trippers.go:580]     Audit-Id: 65ee20dc-99f2-4007-885f-57423986538e
	I1002 10:58:31.440921 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.440927 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.440934 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.440940 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.440947 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.441045 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:31.441443 2249882 pod_ready.go:92] pod "kube-controller-manager-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:31.441462 2249882 pod_ready.go:81] duration metric: took 399.744657ms waiting for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.441474 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.637868 2249882 request.go:629] Waited for 196.328795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:31.637930 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:31.637936 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.637948 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.637956 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.640424 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.640447 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.640456 2249882 round_trippers.go:580]     Audit-Id: e9aad646-7c6f-4900-925a-e992ec03f67a
	I1002 10:58:31.640462 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.640468 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.640474 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.640480 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.640486 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.640924 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"473","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I1002 10:58:31.837832 2249882 request.go:629] Waited for 196.383244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:31.837897 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:31.837906 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:31.837915 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:31.837931 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:31.840325 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:31.840349 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:31.840358 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:31.840365 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:31.840371 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:31 GMT
	I1002 10:58:31.840379 2249882 round_trippers.go:580]     Audit-Id: c810c7d8-2b6f-47b6-9e64-a802306c1ce0
	I1002 10:58:31.840386 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:31.840392 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:31.840648 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"fae5cedd-05b9-4641-a9c0-540d8cb0740c","resourceVersion":"540","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4461 chars]
	I1002 10:58:31.840996 2249882 pod_ready.go:92] pod "kube-proxy-76wth" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:31.841013 2249882 pod_ready.go:81] duration metric: took 399.528905ms waiting for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:31.841025 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.038425 2249882 request.go:629] Waited for 197.33235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:32.038503 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:32.038513 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.038522 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.038609 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.041473 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.041498 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.041514 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.041530 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.041537 2249882 round_trippers.go:580]     Audit-Id: 44930dcc-e3c3-4dd3-8f89-d5946c519efe
	I1002 10:58:32.041543 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.041549 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.041555 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.041688 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fjcp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"2d159cb7-69ca-4b3c-b918-b698bb157220","resourceVersion":"712","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1002 10:58:32.238448 2249882 request.go:629] Waited for 196.172579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:32.238531 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:32.238558 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.238573 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.238589 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.241126 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.241151 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.241160 2249882 round_trippers.go:580]     Audit-Id: a4daacb1-0bcd-4a68-b209-f7f065e88735
	I1002 10:58:32.241167 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.241173 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.241180 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.241186 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.241197 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.241409 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:32.241810 2249882 pod_ready.go:92] pod "kube-proxy-fjcp8" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:32.241826 2249882 pod_ready.go:81] duration metric: took 400.790018ms waiting for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.241839 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.438213 2249882 request.go:629] Waited for 196.309751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:32.438274 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:32.438285 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.438294 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.438305 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.440904 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.440960 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.440983 2249882 round_trippers.go:580]     Audit-Id: d527e2de-d1c6-4a30-8de7-f91c7cbc3fac
	I1002 10:58:32.441007 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.441043 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.441056 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.441063 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.441069 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.441182 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xnhqd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd","resourceVersion":"688","creationTimestamp":"2023-10-02T10:56:32Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1002 10:58:32.637863 2249882 request.go:629] Waited for 196.163036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:32.638022 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:32.638060 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.638084 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.638105 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.640544 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.640569 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.640579 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.640585 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.640595 2249882 round_trippers.go:580]     Audit-Id: 4e5a58c7-867a-44bd-991f-a276fb38f73f
	I1002 10:58:32.640605 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.640612 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.640622 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.641039 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m03","uid":"332112e7-39bc-44d1-86bd-88e1074e5d8d","resourceVersion":"670","creationTimestamp":"2023-10-02T10:56:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4075 chars]
	I1002 10:58:32.641420 2249882 pod_ready.go:92] pod "kube-proxy-xnhqd" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:32.641442 2249882 pod_ready.go:81] duration metric: took 399.594282ms waiting for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.641453 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:32.837740 2249882 request.go:629] Waited for 196.22116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:32.837806 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:32.837816 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:32.837833 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:32.837842 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:32.840419 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:32.840446 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:32.840455 2249882 round_trippers.go:580]     Audit-Id: 560e6793-2c2c-479c-9175-e7ef31537652
	I1002 10:58:32.840462 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:32.840469 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:32.840475 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:32.840481 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:32.840488 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:32 GMT
	I1002 10:58:32.840860 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899833","namespace":"kube-system","uid":"65999631-952f-42f1-ae73-f32996dc19fb","resourceVersion":"797","creationTimestamp":"2023-10-02T10:54:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.mirror":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.seen":"2023-10-02T10:54:35.990546729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I1002 10:58:33.037641 2249882 request.go:629] Waited for 196.294777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:33.037722 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:33.037729 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.037738 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.037750 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.040455 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:33.040478 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.040487 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.040494 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.040500 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.040507 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.040513 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.040519 2249882 round_trippers.go:580]     Audit-Id: 156e629f-c6e5-4da2-87c3-221cfa28955c
	I1002 10:58:33.040642 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:33.041041 2249882 pod_ready.go:92] pod "kube-scheduler-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:33.041053 2249882 pod_ready.go:81] duration metric: took 399.589623ms waiting for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:33.041066 2249882 pod_ready.go:38] duration metric: took 3.399721193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:33.041095 2249882 api_server.go:52] waiting for apiserver process to appear ...
	I1002 10:58:33.041159 2249882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:58:33.054404 2249882 command_runner.go:130] > 1961
	I1002 10:58:33.056882 2249882 api_server.go:72] duration metric: took 3.594404466s to wait for apiserver process to appear ...
	I1002 10:58:33.056907 2249882 api_server.go:88] waiting for apiserver healthz status ...
	I1002 10:58:33.056924 2249882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:58:33.066203 2249882 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 10:58:33.066284 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1002 10:58:33.066295 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.066304 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.066313 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.067511 2249882 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 10:58:33.067538 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.067547 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.067553 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.067560 2249882 round_trippers.go:580]     Content-Length: 263
	I1002 10:58:33.067566 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.067572 2249882 round_trippers.go:580]     Audit-Id: 1c32f57a-d0f7-47d9-a86b-ebad71fee90d
	I1002 10:58:33.067581 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.067588 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.067608 2249882 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1002 10:58:33.067654 2249882 api_server.go:141] control plane version: v1.28.2
	I1002 10:58:33.067669 2249882 api_server.go:131] duration metric: took 10.755757ms to wait for apiserver health ...
	I1002 10:58:33.067678 2249882 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 10:58:33.238043 2249882 request.go:629] Waited for 170.295861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:33.238105 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:33.238115 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.238124 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.238137 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.242065 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:33.242092 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.242102 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.242114 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.242120 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.242128 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.242134 2249882 round_trippers.go:580]     Audit-Id: 5073c0e8-4c26-49db-9be5-0064777ff6e9
	I1002 10:58:33.242143 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.243115 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84297 chars]
	I1002 10:58:33.246620 2249882 system_pods.go:59] 12 kube-system pods found
	I1002 10:58:33.246650 2249882 system_pods.go:61] "coredns-5dd5756b68-s5pf5" [f72cd720-6739-45d2-a014-97b1e19d2574] Running
	I1002 10:58:33.246657 2249882 system_pods.go:61] "etcd-multinode-899833" [50fafe88-1106-4021-9c0c-7bb9d9d17ffb] Running
	I1002 10:58:33.246662 2249882 system_pods.go:61] "kindnet-jbhdj" [82532e9c-9f56-44a1-a627-ec7462b9738f] Running
	I1002 10:58:33.246667 2249882 system_pods.go:61] "kindnet-kp6fb" [260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03] Running
	I1002 10:58:33.246673 2249882 system_pods.go:61] "kindnet-lmfm5" [8790fa37-873d-4ec3-a9b3-020dcc4a8e1d] Running
	I1002 10:58:33.246678 2249882 system_pods.go:61] "kube-apiserver-multinode-899833" [fb05b79f-58ee-4097-aa20-b9721f21d29c] Running
	I1002 10:58:33.246684 2249882 system_pods.go:61] "kube-controller-manager-multinode-899833" [92b1c97d-b38b-405b-9e51-272591b87dcf] Running
	I1002 10:58:33.246689 2249882 system_pods.go:61] "kube-proxy-76wth" [675afe15-d632-48d5-8e1e-af889d799786] Running
	I1002 10:58:33.246693 2249882 system_pods.go:61] "kube-proxy-fjcp8" [2d159cb7-69ca-4b3c-b918-b698bb157220] Running
	I1002 10:58:33.246699 2249882 system_pods.go:61] "kube-proxy-xnhqd" [1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd] Running
	I1002 10:58:33.246707 2249882 system_pods.go:61] "kube-scheduler-multinode-899833" [65999631-952f-42f1-ae73-f32996dc19fb] Running
	I1002 10:58:33.246717 2249882 system_pods.go:61] "storage-provisioner" [97d5bb7f-502d-4838-a926-c613783c1588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:58:33.246729 2249882 system_pods.go:74] duration metric: took 179.04189ms to wait for pod list to return data ...
	I1002 10:58:33.246738 2249882 default_sa.go:34] waiting for default service account to be created ...
	I1002 10:58:33.438121 2249882 request.go:629] Waited for 191.305618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 10:58:33.438206 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 10:58:33.438216 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.438225 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.438233 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.440692 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:33.440712 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.440722 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.440728 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.440734 2249882 round_trippers.go:580]     Content-Length: 261
	I1002 10:58:33.440740 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.440747 2249882 round_trippers.go:580]     Audit-Id: d5fbde01-fce0-4656-8930-2bca6e4e2e53
	I1002 10:58:33.440753 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.440759 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.440805 2249882 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a059ba47-c3c4-4536-aa1c-a44f18908aeb","resourceVersion":"307","creationTimestamp":"2023-10-02T10:54:55Z"}}]}
	I1002 10:58:33.440978 2249882 default_sa.go:45] found service account: "default"
	I1002 10:58:33.440995 2249882 default_sa.go:55] duration metric: took 194.246254ms for default service account to be created ...
	I1002 10:58:33.441004 2249882 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 10:58:33.638416 2249882 request.go:629] Waited for 197.330315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:33.638496 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:33.638508 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.638517 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.638525 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.642309 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:33.642335 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.642344 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.642351 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.642357 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.642368 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.642381 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.642393 2249882 round_trippers.go:580]     Audit-Id: 1481a1b3-a0ff-4b20-8a75-93f75cd25398
	I1002 10:58:33.643380 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84297 chars]
	I1002 10:58:33.646923 2249882 system_pods.go:86] 12 kube-system pods found
	I1002 10:58:33.646949 2249882 system_pods.go:89] "coredns-5dd5756b68-s5pf5" [f72cd720-6739-45d2-a014-97b1e19d2574] Running
	I1002 10:58:33.646956 2249882 system_pods.go:89] "etcd-multinode-899833" [50fafe88-1106-4021-9c0c-7bb9d9d17ffb] Running
	I1002 10:58:33.646962 2249882 system_pods.go:89] "kindnet-jbhdj" [82532e9c-9f56-44a1-a627-ec7462b9738f] Running
	I1002 10:58:33.646967 2249882 system_pods.go:89] "kindnet-kp6fb" [260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03] Running
	I1002 10:58:33.646972 2249882 system_pods.go:89] "kindnet-lmfm5" [8790fa37-873d-4ec3-a9b3-020dcc4a8e1d] Running
	I1002 10:58:33.646982 2249882 system_pods.go:89] "kube-apiserver-multinode-899833" [fb05b79f-58ee-4097-aa20-b9721f21d29c] Running
	I1002 10:58:33.646993 2249882 system_pods.go:89] "kube-controller-manager-multinode-899833" [92b1c97d-b38b-405b-9e51-272591b87dcf] Running
	I1002 10:58:33.646998 2249882 system_pods.go:89] "kube-proxy-76wth" [675afe15-d632-48d5-8e1e-af889d799786] Running
	I1002 10:58:33.647005 2249882 system_pods.go:89] "kube-proxy-fjcp8" [2d159cb7-69ca-4b3c-b918-b698bb157220] Running
	I1002 10:58:33.647011 2249882 system_pods.go:89] "kube-proxy-xnhqd" [1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd] Running
	I1002 10:58:33.647022 2249882 system_pods.go:89] "kube-scheduler-multinode-899833" [65999631-952f-42f1-ae73-f32996dc19fb] Running
	I1002 10:58:33.647030 2249882 system_pods.go:89] "storage-provisioner" [97d5bb7f-502d-4838-a926-c613783c1588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:58:33.647041 2249882 system_pods.go:126] duration metric: took 206.030716ms to wait for k8s-apps to be running ...
	I1002 10:58:33.647048 2249882 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 10:58:33.647109 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:58:33.660356 2249882 system_svc.go:56] duration metric: took 13.295428ms WaitForService to wait for kubelet.
	I1002 10:58:33.660380 2249882 kubeadm.go:581] duration metric: took 4.197911011s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 10:58:33.660434 2249882 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:58:33.837780 2249882 request.go:629] Waited for 177.246773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1002 10:58:33.837850 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 10:58:33.837860 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:33.837869 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:33.837879 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:33.840809 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:33.840835 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:33.840844 2249882 round_trippers.go:580]     Audit-Id: f64907f6-5559-497c-8993-f409e00e0a68
	I1002 10:58:33.840850 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:33.840856 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:33.840863 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:33.840869 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:33.840875 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:33 GMT
	I1002 10:58:33.841074 2249882 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15863 chars]
	I1002 10:58:33.841901 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:33.841927 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:33.841938 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:33.841943 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:33.841948 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:33.841953 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:33.841957 2249882 node_conditions.go:105] duration metric: took 181.511685ms to run NodePressure ...
	I1002 10:58:33.841970 2249882 start.go:228] waiting for startup goroutines ...
	I1002 10:58:33.841977 2249882 start.go:233] waiting for cluster config update ...
	I1002 10:58:33.841984 2249882 start.go:242] writing updated cluster config ...
	I1002 10:58:33.842469 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:33.842576 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:33.846452 2249882 out.go:177] * Starting worker node multinode-899833-m02 in cluster multinode-899833
	I1002 10:58:33.848213 2249882 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:58:33.850041 2249882 out.go:177] * Pulling base image ...
	I1002 10:58:33.852150 2249882 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:58:33.852180 2249882 cache.go:57] Caching tarball of preloaded images
	I1002 10:58:33.852214 2249882 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:58:33.852297 2249882 preload.go:174] Found /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 10:58:33.852310 2249882 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 10:58:33.852469 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:33.876215 2249882 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 10:58:33.876238 2249882 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 10:58:33.876257 2249882 cache.go:195] Successfully downloaded all kic artifacts
	I1002 10:58:33.876285 2249882 start.go:365] acquiring machines lock for multinode-899833-m02: {Name:mkf7f969bdbd1303c4e28422c1c64792eb1255fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:58:33.876343 2249882 start.go:369] acquired machines lock for "multinode-899833-m02" in 40.632µs
	I1002 10:58:33.876362 2249882 start.go:96] Skipping create...Using existing machine configuration
	I1002 10:58:33.876368 2249882 fix.go:54] fixHost starting: m02
	I1002 10:58:33.876645 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833-m02 --format={{.State.Status}}
	I1002 10:58:33.901208 2249882 fix.go:102] recreateIfNeeded on multinode-899833-m02: state=Stopped err=<nil>
	W1002 10:58:33.901230 2249882 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 10:58:33.903659 2249882 out.go:177] * Restarting existing docker container for "multinode-899833-m02" ...
	I1002 10:58:33.905501 2249882 cli_runner.go:164] Run: docker start multinode-899833-m02
	I1002 10:58:34.267235 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833-m02 --format={{.State.Status}}
	I1002 10:58:34.297957 2249882 kic.go:426] container "multinode-899833-m02" state is running.
	I1002 10:58:34.298319 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m02
	I1002 10:58:34.327025 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:34.327266 2249882 machine.go:88] provisioning docker machine ...
	I1002 10:58:34.327285 2249882 ubuntu.go:169] provisioning hostname "multinode-899833-m02"
	I1002 10:58:34.327336 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:34.349404 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:34.350010 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:34.350028 2249882 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899833-m02 && echo "multinode-899833-m02" | sudo tee /etc/hostname
	I1002 10:58:34.350778 2249882 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 10:58:37.504175 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899833-m02
	
	I1002 10:58:37.504255 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:37.522926 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:37.523327 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:37.523351 2249882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899833-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899833-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899833-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:58:37.662495 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:58:37.662534 2249882 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2134307/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2134307/.minikube}
	I1002 10:58:37.662550 2249882 ubuntu.go:177] setting up certificates
	I1002 10:58:37.662561 2249882 provision.go:83] configureAuth start
	I1002 10:58:37.662631 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m02
	I1002 10:58:37.682238 2249882 provision.go:138] copyHostCerts
	I1002 10:58:37.682280 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:58:37.682310 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem, removing ...
	I1002 10:58:37.682323 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:58:37.682446 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem (1082 bytes)
	I1002 10:58:37.682552 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:58:37.682577 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem, removing ...
	I1002 10:58:37.682586 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:58:37.682617 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem (1123 bytes)
	I1002 10:58:37.682669 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:58:37.682690 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem, removing ...
	I1002 10:58:37.682697 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:58:37.682723 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem (1679 bytes)
	I1002 10:58:37.682775 2249882 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem org=jenkins.multinode-899833-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-899833-m02]
	I1002 10:58:37.985542 2249882 provision.go:172] copyRemoteCerts
	I1002 10:58:37.985610 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:58:37.985660 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.007200 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:38.108779 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 10:58:38.108842 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:58:38.139812 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 10:58:38.139930 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 10:58:38.177597 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 10:58:38.177658 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 10:58:38.208484 2249882 provision.go:86] duration metric: configureAuth took 545.903844ms
	I1002 10:58:38.208512 2249882 ubuntu.go:193] setting minikube options for container-runtime
	I1002 10:58:38.208765 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:38.208826 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.226672 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:38.227085 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:38.227097 2249882 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 10:58:38.374948 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 10:58:38.375017 2249882 ubuntu.go:71] root file system type: overlay
	I1002 10:58:38.375163 2249882 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 10:58:38.375239 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.394214 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:38.394636 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:38.394718 2249882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 10:58:38.551594 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 10:58:38.551687 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.577216 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:38.577652 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35595 <nil> <nil>}
	I1002 10:58:38.577679 2249882 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 10:58:38.728314 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:58:38.728336 2249882 machine.go:91] provisioned docker machine in 4.401055708s
	I1002 10:58:38.728346 2249882 start.go:300] post-start starting for "multinode-899833-m02" (driver="docker")
	I1002 10:58:38.728357 2249882 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:58:38.728421 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:58:38.728460 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.747308 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:38.848212 2249882 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:58:38.852528 2249882 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 10:58:38.852547 2249882 command_runner.go:130] > NAME="Ubuntu"
	I1002 10:58:38.852554 2249882 command_runner.go:130] > VERSION_ID="22.04"
	I1002 10:58:38.852561 2249882 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 10:58:38.852567 2249882 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 10:58:38.852574 2249882 command_runner.go:130] > ID=ubuntu
	I1002 10:58:38.852579 2249882 command_runner.go:130] > ID_LIKE=debian
	I1002 10:58:38.852585 2249882 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 10:58:38.852591 2249882 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 10:58:38.852598 2249882 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 10:58:38.852606 2249882 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 10:58:38.852614 2249882 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 10:58:38.852664 2249882 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 10:58:38.852698 2249882 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 10:58:38.852711 2249882 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 10:58:38.852720 2249882 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 10:58:38.852732 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/addons for local assets ...
	I1002 10:58:38.852797 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/files for local assets ...
	I1002 10:58:38.852877 2249882 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> 21397002.pem in /etc/ssl/certs
	I1002 10:58:38.852887 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /etc/ssl/certs/21397002.pem
	I1002 10:58:38.853000 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 10:58:38.863724 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:58:38.892327 2249882 start.go:303] post-start completed in 163.964169ms
	I1002 10:58:38.892415 2249882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:58:38.892463 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:38.910228 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:39.004975 2249882 command_runner.go:130] > 12%!
	(MISSING)I1002 10:58:39.005070 2249882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 10:58:39.011792 2249882 command_runner.go:130] > 173G
	I1002 10:58:39.011831 2249882 fix.go:56] fixHost completed within 5.135460901s
	I1002 10:58:39.011860 2249882 start.go:83] releasing machines lock for "multinode-899833-m02", held for 5.135508219s
	I1002 10:58:39.011949 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m02
	I1002 10:58:39.036027 2249882 out.go:177] * Found network options:
	I1002 10:58:39.037973 2249882 out.go:177]   - NO_PROXY=192.168.58.2
	W1002 10:58:39.039923 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 10:58:39.039974 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 10:58:39.040067 2249882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 10:58:39.040128 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:39.040425 2249882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:58:39.040483 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:58:39.074154 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:39.080502 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35595 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:58:39.170914 2249882 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 10:58:39.170936 2249882 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I1002 10:58:39.170945 2249882 command_runner.go:130] > Device: d0h/208d	Inode: 1836145     Links: 1
	I1002 10:58:39.170952 2249882 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:58:39.170959 2249882 command_runner.go:130] > Access: 2023-10-02 10:55:24.702948257 +0000
	I1002 10:58:39.170966 2249882 command_runner.go:130] > Modify: 2023-10-02 10:55:24.550949068 +0000
	I1002 10:58:39.170972 2249882 command_runner.go:130] > Change: 2023-10-02 10:55:24.550949068 +0000
	I1002 10:58:39.170978 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:55:24.550949068 +0000
	I1002 10:58:39.171477 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 10:58:39.307310 2249882 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 10:58:39.311127 2249882 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 10:58:39.311206 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:58:39.324125 2249882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 10:58:39.324153 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:58:39.324188 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:58:39.324280 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:58:39.344988 2249882 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1002 10:58:39.346500 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 10:58:39.358620 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 10:58:39.370823 2249882 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 10:58:39.370944 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 10:58:39.383128 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:58:39.398232 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 10:58:39.410713 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:58:39.423803 2249882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:58:39.435850 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 10:58:39.450908 2249882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:58:39.461844 2249882 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 10:58:39.463500 2249882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:58:39.473416 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:39.568126 2249882 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 10:58:39.689209 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:58:39.689318 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:58:39.689387 2249882 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 10:58:39.703424 2249882 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1002 10:58:39.704411 2249882 command_runner.go:130] > [Unit]
	I1002 10:58:39.704459 2249882 command_runner.go:130] > Description=Docker Application Container Engine
	I1002 10:58:39.704488 2249882 command_runner.go:130] > Documentation=https://docs.docker.com
	I1002 10:58:39.704507 2249882 command_runner.go:130] > BindsTo=containerd.service
	I1002 10:58:39.704536 2249882 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1002 10:58:39.704558 2249882 command_runner.go:130] > Wants=network-online.target
	I1002 10:58:39.704587 2249882 command_runner.go:130] > Requires=docker.socket
	I1002 10:58:39.704611 2249882 command_runner.go:130] > StartLimitBurst=3
	I1002 10:58:39.704664 2249882 command_runner.go:130] > StartLimitIntervalSec=60
	I1002 10:58:39.704682 2249882 command_runner.go:130] > [Service]
	I1002 10:58:39.704705 2249882 command_runner.go:130] > Type=notify
	I1002 10:58:39.704726 2249882 command_runner.go:130] > Restart=on-failure
	I1002 10:58:39.704772 2249882 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1002 10:58:39.704802 2249882 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1002 10:58:39.704829 2249882 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1002 10:58:39.704861 2249882 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1002 10:58:39.704889 2249882 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1002 10:58:39.704921 2249882 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1002 10:58:39.704951 2249882 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1002 10:58:39.704989 2249882 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1002 10:58:39.705015 2249882 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1002 10:58:39.705036 2249882 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1002 10:58:39.705070 2249882 command_runner.go:130] > ExecStart=
	I1002 10:58:39.705111 2249882 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1002 10:58:39.705135 2249882 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1002 10:58:39.705164 2249882 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1002 10:58:39.705194 2249882 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1002 10:58:39.705213 2249882 command_runner.go:130] > LimitNOFILE=infinity
	I1002 10:58:39.705243 2249882 command_runner.go:130] > LimitNPROC=infinity
	I1002 10:58:39.705278 2249882 command_runner.go:130] > LimitCORE=infinity
	I1002 10:58:39.705307 2249882 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1002 10:58:39.705327 2249882 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1002 10:58:39.705353 2249882 command_runner.go:130] > TasksMax=infinity
	I1002 10:58:39.705375 2249882 command_runner.go:130] > TimeoutStartSec=0
	I1002 10:58:39.705400 2249882 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1002 10:58:39.705438 2249882 command_runner.go:130] > Delegate=yes
	I1002 10:58:39.705468 2249882 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1002 10:58:39.705502 2249882 command_runner.go:130] > KillMode=process
	I1002 10:58:39.705542 2249882 command_runner.go:130] > [Install]
	I1002 10:58:39.705565 2249882 command_runner.go:130] > WantedBy=multi-user.target
	I1002 10:58:39.706646 2249882 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 10:58:39.706739 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 10:58:39.725791 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:58:39.745374 2249882 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1002 10:58:39.747259 2249882 ssh_runner.go:195] Run: which cri-dockerd
	I1002 10:58:39.751497 2249882 command_runner.go:130] > /usr/bin/cri-dockerd
	I1002 10:58:39.752216 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 10:58:39.765095 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 10:58:39.803541 2249882 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 10:58:39.927698 2249882 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 10:58:40.056784 2249882 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 10:58:40.056827 2249882 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 10:58:40.094536 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:40.205514 2249882 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:58:40.536851 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:58:40.646452 2249882 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 10:58:40.748708 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:58:40.850979 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:40.947010 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 10:58:40.979266 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:41.096676 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 10:58:41.202347 2249882 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 10:58:41.202470 2249882 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 10:58:41.207630 2249882 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1002 10:58:41.207702 2249882 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 10:58:41.207724 2249882 command_runner.go:130] > Device: feh/254d	Inode: 240         Links: 1
	I1002 10:58:41.207749 2249882 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1002 10:58:41.207786 2249882 command_runner.go:130] > Access: 2023-10-02 10:58:41.109893738 +0000
	I1002 10:58:41.207809 2249882 command_runner.go:130] > Modify: 2023-10-02 10:58:41.109893738 +0000
	I1002 10:58:41.207842 2249882 command_runner.go:130] > Change: 2023-10-02 10:58:41.109893738 +0000
	I1002 10:58:41.207867 2249882 command_runner.go:130] >  Birth: -
	I1002 10:58:41.208571 2249882 start.go:537] Will wait 60s for crictl version
	I1002 10:58:41.208665 2249882 ssh_runner.go:195] Run: which crictl
	I1002 10:58:41.214462 2249882 command_runner.go:130] > /usr/bin/crictl
	I1002 10:58:41.215227 2249882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:58:41.272178 2249882 command_runner.go:130] > Version:  0.1.0
	I1002 10:58:41.272489 2249882 command_runner.go:130] > RuntimeName:  docker
	I1002 10:58:41.272734 2249882 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1002 10:58:41.272966 2249882 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 10:58:41.275644 2249882 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 10:58:41.275766 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:58:41.304636 2249882 command_runner.go:130] > 24.0.6
	I1002 10:58:41.306632 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:58:41.338120 2249882 command_runner.go:130] > 24.0.6
	I1002 10:58:41.342860 2249882 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 10:58:41.345120 2249882 out.go:177]   - env NO_PROXY=192.168.58.2
	I1002 10:58:41.347005 2249882 cli_runner.go:164] Run: docker network inspect multinode-899833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:58:41.367378 2249882 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 10:58:41.372466 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:58:41.388337 2249882 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833 for IP: 192.168.58.3
	I1002 10:58:41.388369 2249882 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1d43a94e604cdd7d897bd7b1078cd14b38f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:58:41.388512 2249882 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key
	I1002 10:58:41.388552 2249882 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key
	I1002 10:58:41.388562 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 10:58:41.388575 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 10:58:41.388587 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 10:58:41.388599 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 10:58:41.388655 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem (1338 bytes)
	W1002 10:58:41.388685 2249882 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700_empty.pem, impossibly tiny 0 bytes
	I1002 10:58:41.388695 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 10:58:41.388719 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:58:41.388742 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:58:41.388764 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem (1679 bytes)
	I1002 10:58:41.388811 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:58:41.388838 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem -> /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.388850 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.388861 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.389202 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:58:41.419242 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 10:58:41.450028 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:58:41.484475 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 10:58:41.515125 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem --> /usr/share/ca-certificates/2139700.pem (1338 bytes)
	I1002 10:58:41.545712 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /usr/share/ca-certificates/21397002.pem (1708 bytes)
	I1002 10:58:41.575229 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:58:41.611525 2249882 ssh_runner.go:195] Run: openssl version
	I1002 10:58:41.618526 2249882 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 10:58:41.619381 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21397002.pem && ln -fs /usr/share/ca-certificates/21397002.pem /etc/ssl/certs/21397002.pem"
	I1002 10:58:41.631766 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.637382 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.637965 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.638033 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21397002.pem
	I1002 10:58:41.646416 2249882 command_runner.go:130] > 3ec20f2e
	I1002 10:58:41.646886 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21397002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 10:58:41.658930 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:58:41.672626 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.678265 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.678294 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.678352 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:41.686663 2249882 command_runner.go:130] > b5213941
	I1002 10:58:41.687119 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 10:58:41.698231 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2139700.pem && ln -fs /usr/share/ca-certificates/2139700.pem /etc/ssl/certs/2139700.pem"
	I1002 10:58:41.709823 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.714794 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.714826 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.714888 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2139700.pem
	I1002 10:58:41.723318 2249882 command_runner.go:130] > 51391683
	I1002 10:58:41.723764 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2139700.pem /etc/ssl/certs/51391683.0"
	I1002 10:58:41.734989 2249882 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:58:41.741322 2249882 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:58:41.741352 2249882 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:58:41.741430 2249882 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 10:58:41.806382 2249882 command_runner.go:130] > cgroupfs
	I1002 10:58:41.807749 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:58:41.807800 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:58:41.807823 2249882 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:58:41.807853 2249882 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899833 NodeName:multinode-899833-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 10:58:41.808020 2249882 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899833-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:58:41.808080 2249882 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-899833-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:58:41.808177 2249882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 10:58:41.817751 2249882 command_runner.go:130] > kubeadm
	I1002 10:58:41.817811 2249882 command_runner.go:130] > kubectl
	I1002 10:58:41.817823 2249882 command_runner.go:130] > kubelet
	I1002 10:58:41.818955 2249882 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:58:41.819019 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 10:58:41.829397 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1002 10:58:41.856290 2249882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 10:58:41.877952 2249882 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 10:58:41.882345 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:58:41.895489 2249882 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:58:41.895883 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:41.895829 2249882 start.go:304] JoinCluster: &{Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:58:41.895950 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 10:58:41.896017 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:58:41.913968 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:58:42.108764 2249882 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token aiz9b8.o67i19javg1wra1n --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d 
	I1002 10:58:42.108814 2249882 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 10:58:42.108852 2249882 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:58:42.109158 2249882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-899833-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1002 10:58:42.109212 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:58:42.132417 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:58:42.300703 2249882 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1002 10:58:42.364419 2249882 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-lmfm5, kube-system/kube-proxy-76wth
	I1002 10:58:45.384586 2249882 command_runner.go:130] > node/multinode-899833-m02 cordoned
	I1002 10:58:45.384613 2249882 command_runner.go:130] > pod "busybox-5bc68d56bd-wzmtg" has DeletionTimestamp older than 1 seconds, skipping
	I1002 10:58:45.384621 2249882 command_runner.go:130] > node/multinode-899833-m02 drained
	I1002 10:58:45.384638 2249882 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-899833-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.275453513s)
	I1002 10:58:45.384650 2249882 node.go:108] successfully drained node "m02"
	I1002 10:58:45.385030 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:45.385315 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:58:45.385730 2249882 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1002 10:58:45.385780 2249882 round_trippers.go:463] DELETE https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:45.385790 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:45.385799 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:45.385805 2249882 round_trippers.go:473]     Content-Type: application/json
	I1002 10:58:45.385815 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:45.389895 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:45.389917 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:45.389926 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:45.389933 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:45.389939 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:45.389945 2249882 round_trippers.go:580]     Content-Length: 171
	I1002 10:58:45.389951 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:45 GMT
	I1002 10:58:45.389963 2249882 round_trippers.go:580]     Audit-Id: 7c902483-793d-4af9-80fa-b8df7ba38d1d
	I1002 10:58:45.389969 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:45.390257 2249882 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-899833-m02","kind":"nodes","uid":"fae5cedd-05b9-4641-a9c0-540d8cb0740c"}}
	I1002 10:58:45.390295 2249882 node.go:124] successfully deleted node "m02"
	I1002 10:58:45.390303 2249882 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 10:58:45.390322 2249882 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 10:58:45.390340 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aiz9b8.o67i19javg1wra1n --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m02"
	I1002 10:58:45.444997 2249882 command_runner.go:130] ! W1002 10:58:45.444554    1534 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:58:45.445622 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:58:45.506197 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:58:45.600027 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:58:45.600091 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:58:46.400816 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:58:46.400840 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:58:46.400850 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:58:46.400857 2249882 command_runner.go:130] > OS: Linux
	I1002 10:58:46.400864 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:58:46.400887 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:58:46.400900 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:58:46.400906 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:58:46.400919 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:58:46.400926 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:58:46.400938 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:58:46.400945 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:58:46.400954 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:58:46.400961 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:58:46.400971 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 10:58:46.400984 2249882 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 10:58:46.400994 2249882 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 10:58:46.401006 2249882 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 10:58:46.401016 2249882 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1002 10:58:46.401026 2249882 command_runner.go:130] > This node has joined the cluster:
	I1002 10:58:46.401034 2249882 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1002 10:58:46.401044 2249882 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1002 10:58:46.401052 2249882 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1002 10:58:46.401066 2249882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aiz9b8.o67i19javg1wra1n --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m02": (1.010714732s)
	I1002 10:58:46.401086 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1002 10:58:46.630483 2249882 start.go:306] JoinCluster complete in 4.734646858s
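The lines above record minikube's worker-rejoin flow for "m02": drain the stale node, delete its Node object, then `kubeadm join` it back. A minimal dry-run sketch of that command sequence (command strings only; the bootstrap token and CA-cert hash are elided, and `kubectl`/`kubeadm` are not actually invoked):

```python
# Dry-run sketch of the drain -> delete -> join sequence logged by
# start.go above. Nothing is executed; we only assemble and print the
# command strings, with <token> and <hash> as placeholders.
node = "multinode-899833-m02"
steps = [
    f"kubectl drain {node} --force --grace-period=1 "
    f"--ignore-daemonsets --delete-emptydir-data",
    f"kubectl delete node {node}",
    f"kubeadm join control-plane.minikube.internal:8443 --token <token> "
    f"--discovery-token-ca-cert-hash sha256:<hash> --node-name={node}",
]
for step in steps:
    print("+", step)
```

Running the real commands requires cluster credentials; the order matters because `kubeadm join` refuses a node name that still exists in the cluster unless preflight errors are ignored, which is why minikube deletes the Node object first.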
	I1002 10:58:46.630512 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:58:46.630518 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:58:46.630574 2249882 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 10:58:46.635558 2249882 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 10:58:46.635580 2249882 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1002 10:58:46.635588 2249882 command_runner.go:130] > Device: 36h/54d	Inode: 1826972     Links: 1
	I1002 10:58:46.635596 2249882 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:58:46.635603 2249882 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1002 10:58:46.635609 2249882 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1002 10:58:46.635615 2249882 command_runner.go:130] > Change: 2023-10-02 10:36:11.204484217 +0000
	I1002 10:58:46.635621 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:36:11.160484379 +0000
	I1002 10:58:46.635673 2249882 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 10:58:46.635688 2249882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 10:58:46.664059 2249882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 10:58:46.927393 2249882 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 10:58:46.939087 2249882 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 10:58:46.942282 2249882 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 10:58:46.953445 2249882 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 10:58:46.959074 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:46.959378 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:58:46.959750 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 10:58:46.959765 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:46.959774 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:46.959784 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:46.962338 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:46.962361 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:46.962368 2249882 round_trippers.go:580]     Audit-Id: f4ff8b04-3c09-4bd2-a5f5-c363566ec78f
	I1002 10:58:46.962375 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:46.962381 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:46.962389 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:46.962395 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:46.962402 2249882 round_trippers.go:580]     Content-Length: 291
	I1002 10:58:46.962412 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:46 GMT
	I1002 10:58:46.962435 2249882 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b08b27fb-9d04-4b90-bfa5-b624291dfc83","resourceVersion":"813","creationTimestamp":"2023-10-02T10:54:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 10:58:46.962529 2249882 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-899833" context rescaled to 1 replicas
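The "rescaled to 1 replicas" message comes from reading the `Scale` subresource returned by the GET above. Parsing the exact response body from the log shows where that replica count lives (this is plain JSON decoding, not the typed client minikube's kapi.go actually uses):

```python
import json

# Scale response body exactly as logged for
# GET .../namespaces/kube-system/deployments/coredns/scale
body = (
    '{"kind":"Scale","apiVersion":"autoscaling/v1","metadata":'
    '{"name":"coredns","namespace":"kube-system",'
    '"uid":"b08b27fb-9d04-4b90-bfa5-b624291dfc83",'
    '"resourceVersion":"813","creationTimestamp":"2023-10-02T10:54:43Z"},'
    '"spec":{"replicas":1},'
    '"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}'
)
scale = json.loads(body)
# Desired vs. observed replicas, plus the selector the scale targets.
print(scale["spec"]["replicas"], scale["status"]["replicas"],
      scale["status"]["selector"])  # → 1 1 k8s-app=kube-dns
```

Since `spec.replicas` already equals 1, the rescale is a no-op here; minikube issues it anyway so multi-node clusters don't keep the default two coredns replicas.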
	I1002 10:58:46.962555 2249882 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 10:58:46.966030 2249882 out.go:177] * Verifying Kubernetes components...
	I1002 10:58:46.968076 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:58:46.983691 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:58:46.984471 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:58:46.984754 2249882 node_ready.go:35] waiting up to 6m0s for node "multinode-899833-m02" to be "Ready" ...
	I1002 10:58:46.984829 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:46.984846 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:46.984855 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:46.984863 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:46.987514 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:46.987572 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:46.987593 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:46.987616 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:46 GMT
	I1002 10:58:46.987652 2249882 round_trippers.go:580]     Audit-Id: 15075f8d-10d2-4d49-9e76-538893f8a9b3
	I1002 10:58:46.987679 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:46.987695 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:46.987701 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:46.987846 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"7606c903-0e15-4319-b574-a2d4b3326b01","resourceVersion":"869","creationTimestamp":"2023-10-02T10:58:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4244 chars]
	I1002 10:58:46.988380 2249882 node_ready.go:49] node "multinode-899833-m02" has status "Ready":"True"
	I1002 10:58:46.988400 2249882 node_ready.go:38] duration metric: took 3.624875ms waiting for node "multinode-899833-m02" to be "Ready" ...
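The node_ready.go wait above polls the Node object until its `Ready` condition reports `"True"`. A sketch of that condition check against a hand-built Node dict (hypothetical data; minikube's Go code does the same lookup through the typed client):

```python
# Extract the Ready condition from a Node's status, as node_ready.go's
# wait loop effectively does. The sample node below is synthetic.
def node_is_ready(node: dict) -> bool:
    for cond in node.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False  # no Ready condition yet: treat as not ready

node = {
    "metadata": {"name": "multinode-899833-m02"},
    "status": {"conditions": [
        {"type": "MemoryPressure", "status": "False"},
        {"type": "Ready", "status": "True"},
    ]},
}
print(node_is_ready(node))  # → True
```

Note the condition's `status` field is the string `"True"`/`"False"`/`"Unknown"`, not a boolean, which is why the comparison is against the literal string.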
	I1002 10:58:46.988442 2249882 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:58:46.988514 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 10:58:46.988524 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:46.988533 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:46.988540 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:46.992913 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:46.992987 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:46.993010 2249882 round_trippers.go:580]     Audit-Id: f7674812-1990-4b23-b5de-2dece07163f4
	I1002 10:58:46.993035 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:46.993072 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:46.993098 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:46.993139 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:46.993164 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:46 GMT
	I1002 10:58:46.993649 2249882 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"869"},"items":[{"metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84334 chars]
	I1002 10:58:46.997385 2249882 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:46.997484 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-s5pf5
	I1002 10:58:46.997492 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:46.997501 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:46.997508 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.001241 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:47.001349 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.001372 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.001396 2249882 round_trippers.go:580]     Audit-Id: f2411a28-6f35-4925-9c47-841571754743
	I1002 10:58:47.001432 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.001463 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.001489 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.001512 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.001669 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-s5pf5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72cd720-6739-45d2-a014-97b1e19d2574","resourceVersion":"809","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c348e7d2-5346-4a57-be20-74380ca24934","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c348e7d2-5346-4a57-be20-74380ca24934\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1002 10:58:47.002321 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:47.002341 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.002351 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.002359 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.012329 2249882 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1002 10:58:47.012360 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.012368 2249882 round_trippers.go:580]     Audit-Id: d68bc8a6-0530-4de3-9074-57814eb42abe
	I1002 10:58:47.012375 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.012381 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.012387 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.012394 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.012428 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.012583 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:47.013044 2249882 pod_ready.go:92] pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:47.013067 2249882 pod_ready.go:81] duration metric: took 15.64872ms waiting for pod "coredns-5dd5756b68-s5pf5" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.013120 2249882 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.013219 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-899833
	I1002 10:58:47.013228 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.013236 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.013244 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.015693 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.015712 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.015720 2249882 round_trippers.go:580]     Audit-Id: fd7f3b43-e3ee-4582-b3a6-fa2a49d6b655
	I1002 10:58:47.015727 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.015758 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.015773 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.015780 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.015787 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.016260 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899833","namespace":"kube-system","uid":"50fafe88-1106-4021-9c0c-7bb9d9d17ffb","resourceVersion":"780","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.mirror":"6ea6d85a62e8c404ead7b2351d9904b6","kubernetes.io/config.seen":"2023-10-02T10:54:43.504344255Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I1002 10:58:47.016781 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:47.016800 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.016809 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.016829 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.019765 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.019788 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.019796 2249882 round_trippers.go:580]     Audit-Id: 7dbd89ea-8458-41e7-94f8-5c7c45f603bf
	I1002 10:58:47.019803 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.019809 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.019815 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.019841 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.019856 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.020376 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:47.020870 2249882 pod_ready.go:92] pod "etcd-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:47.020914 2249882 pod_ready.go:81] duration metric: took 7.777574ms waiting for pod "etcd-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.020949 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.021041 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899833
	I1002 10:58:47.021076 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.021100 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.021123 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.023526 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.023575 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.023613 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.023637 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.023659 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.023694 2249882 round_trippers.go:580]     Audit-Id: 05f070c7-c407-4a06-8b68-d4851ce89a4b
	I1002 10:58:47.023719 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.023740 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.028617 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899833","namespace":"kube-system","uid":"fb05b79f-58ee-4097-aa20-b9721f21d29c","resourceVersion":"785","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.mirror":"6b8321b57953ac8c68ccd1f025f1ab0e","kubernetes.io/config.seen":"2023-10-02T10:54:43.504350548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8445 chars]
	I1002 10:58:47.029331 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:47.029379 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.029405 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.029431 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.040902 2249882 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1002 10:58:47.040974 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.040997 2249882 round_trippers.go:580]     Audit-Id: 25c39b42-f031-493d-9b6a-07a7796d125e
	I1002 10:58:47.041018 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.041054 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.041079 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.041100 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.041135 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.041695 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:47.042175 2249882 pod_ready.go:92] pod "kube-apiserver-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:47.042216 2249882 pod_ready.go:81] duration metric: took 21.245694ms waiting for pod "kube-apiserver-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.042242 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.042339 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899833
	I1002 10:58:47.042373 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.042394 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.042417 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.052124 2249882 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1002 10:58:47.052201 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.052225 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.052249 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.052284 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.052309 2249882 round_trippers.go:580]     Audit-Id: d3348536-bf24-4719-80f9-c867d42b28a8
	I1002 10:58:47.052332 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.052365 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.053512 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899833","namespace":"kube-system","uid":"92b1c97d-b38b-405b-9e51-272591b87dcf","resourceVersion":"798","creationTimestamp":"2023-10-02T10:54:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.mirror":"1a005923c2d8170d5763a799037add97","kubernetes.io/config.seen":"2023-10-02T10:54:43.504351845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8018 chars]
	I1002 10:58:47.054204 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:47.054249 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.054274 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.054296 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.058470 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:58:47.058529 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.058553 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.058575 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.058611 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.058636 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.058659 2249882 round_trippers.go:580]     Audit-Id: fa4ee297-fb15-477b-86c8-aad4a907a8d1
	I1002 10:58:47.058696 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.058880 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:47.059380 2249882 pod_ready.go:92] pod "kube-controller-manager-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:47.059432 2249882 pod_ready.go:81] duration metric: took 17.169351ms waiting for pod "kube-controller-manager-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.059458 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:47.185822 2249882 request.go:629] Waited for 126.247359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:47.185900 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:47.185913 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.185924 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.185936 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.188471 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.188496 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.188505 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.188512 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.188518 2249882 round_trippers.go:580]     Audit-Id: 5ad84008-5ddf-4525-8f99-cf53887225b9
	I1002 10:58:47.188524 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.188530 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.188537 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.188642 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"873","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I1002 10:58:47.385519 2249882 request.go:629] Waited for 196.329353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:47.385632 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:47.385670 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.385698 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.385739 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.388456 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.388514 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.388528 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.388535 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.388541 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.388547 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.388554 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.388568 2249882 round_trippers.go:580]     Audit-Id: d829c825-b25f-4771-8d56-bd8f6d7dc99b
	I1002 10:58:47.389059 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"7606c903-0e15-4319-b574-a2d4b3326b01","resourceVersion":"869","creationTimestamp":"2023-10-02T10:58:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4244 chars]
	I1002 10:58:47.585919 2249882 request.go:629] Waited for 196.353689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:47.586023 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:47.586033 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.586043 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.586053 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.588709 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.588780 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.588803 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.588830 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.588864 2249882 round_trippers.go:580]     Audit-Id: f04e7994-d299-401d-b2c6-a73780405388
	I1002 10:58:47.588888 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.588909 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.588931 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.589323 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"873","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I1002 10:58:47.785012 2249882 request.go:629] Waited for 195.149765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:47.785093 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:47.785113 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:47.785123 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:47.785133 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:47.787948 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:47.788005 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:47.788027 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:47 GMT
	I1002 10:58:47.788050 2249882 round_trippers.go:580]     Audit-Id: 6f6145ae-06c8-4407-a945-4f76311b2986
	I1002 10:58:47.788083 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:47.788109 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:47.788132 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:47.788153 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:47.788324 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"7606c903-0e15-4319-b574-a2d4b3326b01","resourceVersion":"869","creationTimestamp":"2023-10-02T10:58:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4244 chars]
	I1002 10:58:48.289458 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-76wth
	I1002 10:58:48.289481 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.289494 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.289502 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.292154 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.292178 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.292187 2249882 round_trippers.go:580]     Audit-Id: 45123e2d-3be3-4e36-9aa7-27961e4a25c6
	I1002 10:58:48.292194 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.292200 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.292206 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.292212 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.292219 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.292483 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-76wth","generateName":"kube-proxy-","namespace":"kube-system","uid":"675afe15-d632-48d5-8e1e-af889d799786","resourceVersion":"890","creationTimestamp":"2023-10-02T10:55:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:55:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1002 10:58:48.292979 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m02
	I1002 10:58:48.292996 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.293007 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.293015 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.295416 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.295455 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.295464 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.295472 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.295478 2249882 round_trippers.go:580]     Audit-Id: 0e1b3ed6-f37d-47f3-9cdd-d8b2760f0d4e
	I1002 10:58:48.295488 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.295495 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.295505 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.295602 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m02","uid":"7606c903-0e15-4319-b574-a2d4b3326b01","resourceVersion":"869","creationTimestamp":"2023-10-02T10:58:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:58:46Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4244 chars]
	I1002 10:58:48.295936 2249882 pod_ready.go:92] pod "kube-proxy-76wth" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:48.295955 2249882 pod_ready.go:81] duration metric: took 1.236458604s waiting for pod "kube-proxy-76wth" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:48.295967 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:48.385290 2249882 request.go:629] Waited for 89.214567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:48.385368 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fjcp8
	I1002 10:58:48.385380 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.385390 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.385398 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.388217 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.388238 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.388247 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.388253 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.388260 2249882 round_trippers.go:580]     Audit-Id: d5f9879f-2f51-4f2e-a5d8-ed9b7c81a336
	I1002 10:58:48.388274 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.388282 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.388292 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.388525 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fjcp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"2d159cb7-69ca-4b3c-b918-b698bb157220","resourceVersion":"712","creationTimestamp":"2023-10-02T10:54:56Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1002 10:58:48.585522 2249882 request.go:629] Waited for 196.337352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:48.585605 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:48.585611 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.585619 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.585633 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.588494 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.588564 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.588588 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.588610 2249882 round_trippers.go:580]     Audit-Id: fd4c27dc-924b-4f30-913f-c0c56256e5c6
	I1002 10:58:48.588632 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.588653 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.588684 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.588706 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.589047 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:48.589470 2249882 pod_ready.go:92] pod "kube-proxy-fjcp8" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:48.589488 2249882 pod_ready.go:81] duration metric: took 293.508241ms waiting for pod "kube-proxy-fjcp8" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:48.589500 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:48.785890 2249882 request.go:629] Waited for 196.32054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:48.785954 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhqd
	I1002 10:58:48.785960 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.785969 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.785976 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.788574 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.788601 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.788610 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.788618 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.788624 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.788630 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.788636 2249882 round_trippers.go:580]     Audit-Id: ab526cc8-cc6d-490d-954e-97496194efc9
	I1002 10:58:48.788643 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.788737 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xnhqd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a740d6d-4d91-4e2a-95c8-2f3b5d6098dd","resourceVersion":"846","creationTimestamp":"2023-10-02T10:56:32Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8016409b-bdd0-4516-ad52-9362a561fac6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8016409b-bdd0-4516-ad52-9362a561fac6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5967 chars]
	I1002 10:58:48.985574 2249882 request.go:629] Waited for 196.313492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:48.985648 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:58:48.985657 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:48.985666 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:48.985677 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:48.988277 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:48.988299 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:48.988307 2249882 round_trippers.go:580]     Audit-Id: 3691ed64-1d9a-4b07-adb3-0acd24895ded
	I1002 10:58:48.988314 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:48.988320 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:48.988326 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:48.988333 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:48.988340 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:48 GMT
	I1002 10:58:48.988459 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833-m03","uid":"332112e7-39bc-44d1-86bd-88e1074e5d8d","resourceVersion":"845","creationTimestamp":"2023-10-02T10:56:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:56:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4552 chars]
	I1002 10:58:48.988814 2249882 pod_ready.go:97] node "multinode-899833-m03" hosting pod "kube-proxy-xnhqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-899833-m03" has status "Ready":"Unknown"
	I1002 10:58:48.988837 2249882 pod_ready.go:81] duration metric: took 399.3281ms waiting for pod "kube-proxy-xnhqd" in "kube-system" namespace to be "Ready" ...
	E1002 10:58:48.988847 2249882 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-899833-m03" hosting pod "kube-proxy-xnhqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-899833-m03" has status "Ready":"Unknown"
	I1002 10:58:48.988860 2249882 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:49.185276 2249882 request.go:629] Waited for 196.328605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:49.185380 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899833
	I1002 10:58:49.185420 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:49.185442 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:49.185451 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:49.188037 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:49.188059 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:49.188067 2249882 round_trippers.go:580]     Audit-Id: aa9cfc19-50c2-4802-aa98-1f998c20dd07
	I1002 10:58:49.188074 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:49.188080 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:49.188089 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:49.188096 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:49.188104 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:49 GMT
	I1002 10:58:49.188191 2249882 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899833","namespace":"kube-system","uid":"65999631-952f-42f1-ae73-f32996dc19fb","resourceVersion":"797","creationTimestamp":"2023-10-02T10:54:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.mirror":"92cc629aea648b8185d9267d852c0f44","kubernetes.io/config.seen":"2023-10-02T10:54:35.990546729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T10:54:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I1002 10:58:49.384884 2249882 request.go:629] Waited for 196.254194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:49.384961 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-899833
	I1002 10:58:49.384967 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:49.384983 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:49.384990 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:49.387516 2249882 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 10:58:49.387546 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:49.387555 2249882 round_trippers.go:580]     Audit-Id: ea48e7ff-281b-4d17-9e18-f4b25cc644e6
	I1002 10:58:49.387576 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:49.387583 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:49.387593 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:49.387599 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:49.387615 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:49 GMT
	I1002 10:58:49.387724 2249882 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T10:54:40Z","fieldsType":"FieldsV1","fi [truncated 5290 chars]
	I1002 10:58:49.388116 2249882 pod_ready.go:92] pod "kube-scheduler-multinode-899833" in "kube-system" namespace has status "Ready":"True"
	I1002 10:58:49.388132 2249882 pod_ready.go:81] duration metric: took 399.26241ms waiting for pod "kube-scheduler-multinode-899833" in "kube-system" namespace to be "Ready" ...
	I1002 10:58:49.388144 2249882 pod_ready.go:38] duration metric: took 2.39968877s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
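The pod_ready loop traced above applies two gates: a pod hosted on a node whose Ready condition is not "True" is skipped rather than counted (the `(skipping!)` lines for multinode-899833-m03), and otherwise the pod's own Ready condition decides. A minimal sketch of that decision over Kubernetes-API-shaped dicts — the helper name and dict plumbing are hypothetical, not minikube's actual code:

```python
def pod_is_ready(pod: dict, nodes: dict) -> tuple[bool, str]:
    """Return (ready, reason) for a pod, gating first on its host node."""
    # Gate 1: a pod on a not-Ready node is skipped, matching the
    # 'node ... is currently not "Ready" (skipping!)' lines in the log.
    node = nodes.get(pod.get("spec", {}).get("nodeName", ""), {})
    for cond in node.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready" and cond["status"] != "True":
            return False, f'node has status "Ready":"{cond["status"]}"'
    # Gate 2: the pod's own Ready condition.
    for cond in pod.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True", f'"Ready":"{cond["status"]}"'
    return False, "no Ready condition yet"
```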
	I1002 10:58:49.388166 2249882 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 10:58:49.388231 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:58:49.401375 2249882 system_svc.go:56] duration metric: took 13.198969ms WaitForService to wait for kubelet.
	I1002 10:58:49.401402 2249882 kubeadm.go:581] duration metric: took 2.438820931s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 10:58:49.401435 2249882 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:58:49.585826 2249882 request.go:629] Waited for 184.31738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1002 10:58:49.585902 2249882 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 10:58:49.585912 2249882 round_trippers.go:469] Request Headers:
	I1002 10:58:49.585938 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:58:49.585951 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:58:49.589189 2249882 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 10:58:49.589215 2249882 round_trippers.go:577] Response Headers:
	I1002 10:58:49.589225 2249882 round_trippers.go:580]     Audit-Id: e791ff26-de7b-4061-a21b-35eaba37c62f
	I1002 10:58:49.589232 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:58:49.589277 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:58:49.589293 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:58:49.589300 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:58:49.589328 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:58:49 GMT
	I1002 10:58:49.589570 2249882 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"multinode-899833","uid":"d1fdf760-f7ff-47b4-8806-0559ae07fd6d","resourceVersion":"696","creationTimestamp":"2023-10-02T10:54:40Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-899833","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-899833","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T10_54_44_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16123 chars]
	I1002 10:58:49.590396 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:49.590419 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:49.590429 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:49.590439 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:49.590444 2249882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 10:58:49.590452 2249882 node_conditions.go:123] node cpu capacity is 2
	I1002 10:58:49.590457 2249882 node_conditions.go:105] duration metric: took 189.012616ms to run NodePressure ...
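The recurring "Waited for ~196ms due to client-side throttling, not priority and fairness" lines above are the Kubernetes client's local rate limiter spacing requests out, consistent with client-go's default QPS of 5 (one request per 200ms once the burst is spent). A toy sketch of that spacing behavior — not the real client-go implementation:

```python
class QPSLimiter:
    """Toy request spacer: successive requests are spaced at least
    1/qps seconds apart once the initial free slot is used."""

    def __init__(self, qps: float) -> None:
        self.interval = 1.0 / qps
        self.next_free = 0.0  # earliest time the next request may fire

    def wait_time(self, now: float) -> float:
        """How long a request arriving at `now` must wait."""
        delay = max(0.0, self.next_free - now)
        self.next_free = max(now, self.next_free) + self.interval
        return delay
```

At qps=5, a second request issued immediately after the first waits 0.2s, matching the ~196ms pauses logged above.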
	I1002 10:58:49.590468 2249882 start.go:228] waiting for startup goroutines ...
	I1002 10:58:49.590493 2249882 start.go:242] writing updated cluster config ...
	I1002 10:58:49.590965 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:49.591065 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:49.595403 2249882 out.go:177] * Starting worker node multinode-899833-m03 in cluster multinode-899833
	I1002 10:58:49.597296 2249882 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:58:49.599158 2249882 out.go:177] * Pulling base image ...
	I1002 10:58:49.600939 2249882 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:58:49.600981 2249882 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:58:49.601009 2249882 cache.go:57] Caching tarball of preloaded images
	I1002 10:58:49.601107 2249882 preload.go:174] Found /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 10:58:49.601125 2249882 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 10:58:49.601278 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:49.618660 2249882 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 10:58:49.618686 2249882 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 10:58:49.618708 2249882 cache.go:195] Successfully downloaded all kic artifacts
	I1002 10:58:49.618741 2249882 start.go:365] acquiring machines lock for multinode-899833-m03: {Name:mk43e44e85df8dde2d3b8f9b294e7c14a9ba3c8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:58:49.618816 2249882 start.go:369] acquired machines lock for "multinode-899833-m03" in 50.83µs
	I1002 10:58:49.618840 2249882 start.go:96] Skipping create...Using existing machine configuration
	I1002 10:58:49.618849 2249882 fix.go:54] fixHost starting: m03
	I1002 10:58:49.619124 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833-m03 --format={{.State.Status}}
	I1002 10:58:49.639194 2249882 fix.go:102] recreateIfNeeded on multinode-899833-m03: state=Stopped err=<nil>
	W1002 10:58:49.639220 2249882 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 10:58:49.641576 2249882 out.go:177] * Restarting existing docker container for "multinode-899833-m03" ...
	I1002 10:58:49.643334 2249882 cli_runner.go:164] Run: docker start multinode-899833-m03
	I1002 10:58:50.020086 2249882 cli_runner.go:164] Run: docker container inspect multinode-899833-m03 --format={{.State.Status}}
	I1002 10:58:50.054535 2249882 kic.go:426] container "multinode-899833-m03" state is running.
	I1002 10:58:50.054910 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m03
	I1002 10:58:50.094641 2249882 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/config.json ...
	I1002 10:58:50.094918 2249882 machine.go:88] provisioning docker machine ...
	I1002 10:58:50.094939 2249882 ubuntu.go:169] provisioning hostname "multinode-899833-m03"
	I1002 10:58:50.094998 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:50.118187 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:50.118614 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:50.118628 2249882 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899833-m03 && echo "multinode-899833-m03" | sudo tee /etc/hostname
	I1002 10:58:50.119202 2249882 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60994->127.0.0.1:35600: read: connection reset by peer
	I1002 10:58:53.278358 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899833-m03
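Just above, the first SSH dial to the freshly restarted container fails with "connection reset by peer" (the sshd inside it is still coming up), and the hostname command only succeeds about three seconds later: libmachine simply retries the dial. A generic sketch of that retry pattern, with hypothetical names:

```python
import time


def dial_with_retry(dial, attempts: int = 5, delay: float = 1.0):
    """Call `dial` until it succeeds, sleeping between attempts;
    re-raise the last error if every attempt fails."""
    last_err = None
    for _ in range(attempts):
        try:
            return dial()
        except ConnectionError as err:
            last_err = err  # e.g. 'read: connection reset by peer'
            time.sleep(delay)
    raise last_err
```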
	
	I1002 10:58:53.278453 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:53.302262 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:53.302681 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:53.302705 2249882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899833-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899833-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899833-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:58:53.446719 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
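The shell block above is idempotent: it does nothing if any /etc/hosts line already ends with the hostname, rewrites an existing 127.0.1.1 entry if one is present, and appends one otherwise. The same logic in Python, with a hypothetical helper name:

```python
import re


def ensure_hosts_entry(hosts: str, name: str) -> str:
    """Mirror the /etc/hosts shell above: no-op if present,
    else rewrite the 127.0.1.1 line, else append one."""
    lines = hosts.splitlines()
    # Already resolvable: leave the file untouched.
    if any(re.fullmatch(r".*\s" + re.escape(name), ln) for ln in lines):
        return hosts
    entry = f"127.0.1.1 {name}"
    for i, ln in enumerate(lines):
        if re.fullmatch(r"127\.0\.1\.1\s.*", ln):
            lines[i] = entry  # rewrite the existing 127.0.1.1 line
            break
    else:
        lines.append(entry)  # no 127.0.1.1 line: append one
    return "\n".join(lines) + "\n"
```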
	I1002 10:58:53.446751 2249882 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17340-2134307/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-2134307/.minikube}
	I1002 10:58:53.446768 2249882 ubuntu.go:177] setting up certificates
	I1002 10:58:53.446777 2249882 provision.go:83] configureAuth start
	I1002 10:58:53.446841 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m03
	I1002 10:58:53.471639 2249882 provision.go:138] copyHostCerts
	I1002 10:58:53.471681 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:58:53.471717 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem, removing ...
	I1002 10:58:53.471731 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem
	I1002 10:58:53.471812 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.pem (1082 bytes)
	I1002 10:58:53.471895 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:58:53.471919 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem, removing ...
	I1002 10:58:53.471923 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem
	I1002 10:58:53.471950 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/cert.pem (1123 bytes)
	I1002 10:58:53.472028 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:58:53.472050 2249882 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem, removing ...
	I1002 10:58:53.472055 2249882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem
	I1002 10:58:53.472079 2249882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-2134307/.minikube/key.pem (1679 bytes)
	I1002 10:58:53.472122 2249882 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem org=jenkins.multinode-899833-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-899833-m03]
	I1002 10:58:55.571320 2249882 provision.go:172] copyRemoteCerts
	I1002 10:58:55.571392 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:58:55.571441 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:55.594497 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:55.696031 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 10:58:55.696091 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:58:55.726224 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 10:58:55.726285 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 10:58:55.757341 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 10:58:55.757404 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 10:58:55.787148 2249882 provision.go:86] duration metric: configureAuth took 2.34035234s
	I1002 10:58:55.787178 2249882 ubuntu.go:193] setting minikube options for container-runtime
	I1002 10:58:55.787444 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:58:55.787508 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:55.812106 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:55.812705 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:55.812723 2249882 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 10:58:55.960536 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 10:58:55.960559 2249882 ubuntu.go:71] root file system type: overlay
	I1002 10:58:55.960673 2249882 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 10:58:55.960745 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:55.983369 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:55.983793 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:55.983879 2249882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	Environment="NO_PROXY=192.168.58.2,192.168.58.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 10:58:56.136310 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	Environment=NO_PROXY=192.168.58.2,192.168.58.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 10:58:56.136402 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:56.155423 2249882 main.go:141] libmachine: Using SSH client type: native
	I1002 10:58:56.155824 2249882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 35600 <nil> <nil>}
	I1002 10:58:56.155848 2249882 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 10:58:57.127324 2249882 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-02 10:56:25.218624660 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-02 10:58:56.129812635 +0000
	@@ -12,6 +12,8 @@
	 Type=notify
	 Restart=on-failure
	 
	+Environment=NO_PROXY=192.168.58.2
	+Environment=NO_PROXY=192.168.58.2,192.168.58.3
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1002 10:58:57.127393 2249882 machine.go:91] provisioned docker machine in 7.032463229s
	I1002 10:58:57.127419 2249882 start.go:300] post-start starting for "multinode-899833-m03" (driver="docker")
	I1002 10:58:57.127447 2249882 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:58:57.127549 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:58:57.127631 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:57.146671 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:57.249382 2249882 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:58:57.255390 2249882 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 10:58:57.255457 2249882 command_runner.go:130] > NAME="Ubuntu"
	I1002 10:58:57.255481 2249882 command_runner.go:130] > VERSION_ID="22.04"
	I1002 10:58:57.255494 2249882 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 10:58:57.255501 2249882 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 10:58:57.255505 2249882 command_runner.go:130] > ID=ubuntu
	I1002 10:58:57.255510 2249882 command_runner.go:130] > ID_LIKE=debian
	I1002 10:58:57.255516 2249882 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 10:58:57.255526 2249882 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 10:58:57.255537 2249882 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 10:58:57.255548 2249882 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 10:58:57.255556 2249882 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 10:58:57.255618 2249882 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 10:58:57.255645 2249882 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 10:58:57.255661 2249882 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 10:58:57.255669 2249882 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 10:58:57.255679 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/addons for local assets ...
	I1002 10:58:57.255749 2249882 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-2134307/.minikube/files for local assets ...
	I1002 10:58:57.255848 2249882 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> 21397002.pem in /etc/ssl/certs
	I1002 10:58:57.255858 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /etc/ssl/certs/21397002.pem
	I1002 10:58:57.255973 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 10:58:57.268301 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:58:57.298852 2249882 start.go:303] post-start completed in 171.399689ms
	I1002 10:58:57.298942 2249882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:58:57.298984 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:57.321641 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:57.424519 2249882 command_runner.go:130] > 12%
	I1002 10:58:57.424644 2249882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 10:58:57.430906 2249882 command_runner.go:130] > 173G
	I1002 10:58:57.431273 2249882 fix.go:56] fixHost completed within 7.812419557s
	I1002 10:58:57.431313 2249882 start.go:83] releasing machines lock for "multinode-899833-m03", held for 7.812485043s
	I1002 10:58:57.431402 2249882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m03
	I1002 10:58:57.454734 2249882 out.go:177] * Found network options:
	I1002 10:58:57.456468 2249882 out.go:177]   - NO_PROXY=192.168.58.2,192.168.58.3
	W1002 10:58:57.458502 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 10:58:57.458531 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 10:58:57.458563 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 10:58:57.458578 2249882 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 10:58:57.458649 2249882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 10:58:57.458693 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:57.458954 2249882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:58:57.459009 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m03
	I1002 10:58:57.481406 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:57.485723 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35600 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m03/id_rsa Username:docker}
	I1002 10:58:57.587457 2249882 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 10:58:57.587482 2249882 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I1002 10:58:57.587491 2249882 command_runner.go:130] > Device: 100031h/1048625d	Inode: 1836318     Links: 1
	I1002 10:58:57.587498 2249882 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 10:58:57.587535 2249882 command_runner.go:130] > Access: 2023-10-02 10:58:50.713841886 +0000
	I1002 10:58:57.587550 2249882 command_runner.go:130] > Modify: 2023-10-02 10:56:55.858460341 +0000
	I1002 10:58:57.587557 2249882 command_runner.go:130] > Change: 2023-10-02 10:56:55.858460341 +0000
	I1002 10:58:57.587563 2249882 command_runner.go:130] >  Birth: 2023-10-02 10:56:55.858460341 +0000
	I1002 10:58:57.588168 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 10:58:57.729006 2249882 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 10:58:57.732193 2249882 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 10:58:57.732336 2249882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:58:57.744259 2249882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 10:58:57.744286 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:58:57.744318 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:58:57.744412 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:58:57.766674 2249882 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1002 10:58:57.769429 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 10:58:57.782075 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 10:58:57.794212 2249882 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 10:58:57.794294 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 10:58:57.806609 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:58:57.823729 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 10:58:57.835229 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 10:58:57.848995 2249882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:58:57.861777 2249882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 10:58:57.873618 2249882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:58:57.882659 2249882 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 10:58:57.883900 2249882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:58:57.893914 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:58.010215 2249882 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 10:58:58.120210 2249882 start.go:469] detecting cgroup driver to use...
	I1002 10:58:58.120253 2249882 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 10:58:58.120319 2249882 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 10:58:58.137162 2249882 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1002 10:58:58.138402 2249882 command_runner.go:130] > [Unit]
	I1002 10:58:58.138422 2249882 command_runner.go:130] > Description=Docker Application Container Engine
	I1002 10:58:58.138430 2249882 command_runner.go:130] > Documentation=https://docs.docker.com
	I1002 10:58:58.138436 2249882 command_runner.go:130] > BindsTo=containerd.service
	I1002 10:58:58.138443 2249882 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1002 10:58:58.138449 2249882 command_runner.go:130] > Wants=network-online.target
	I1002 10:58:58.138459 2249882 command_runner.go:130] > Requires=docker.socket
	I1002 10:58:58.138465 2249882 command_runner.go:130] > StartLimitBurst=3
	I1002 10:58:58.138472 2249882 command_runner.go:130] > StartLimitIntervalSec=60
	I1002 10:58:58.138477 2249882 command_runner.go:130] > [Service]
	I1002 10:58:58.138482 2249882 command_runner.go:130] > Type=notify
	I1002 10:58:58.138493 2249882 command_runner.go:130] > Restart=on-failure
	I1002 10:58:58.138499 2249882 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1002 10:58:58.138506 2249882 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2,192.168.58.3
	I1002 10:58:58.138521 2249882 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1002 10:58:58.138530 2249882 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1002 10:58:58.138548 2249882 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1002 10:58:58.138564 2249882 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1002 10:58:58.138573 2249882 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1002 10:58:58.138584 2249882 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1002 10:58:58.138596 2249882 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1002 10:58:58.138608 2249882 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1002 10:58:58.138616 2249882 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1002 10:58:58.138621 2249882 command_runner.go:130] > ExecStart=
	I1002 10:58:58.138639 2249882 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1002 10:58:58.138652 2249882 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1002 10:58:58.138662 2249882 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1002 10:58:58.138674 2249882 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1002 10:58:58.138684 2249882 command_runner.go:130] > LimitNOFILE=infinity
	I1002 10:58:58.138693 2249882 command_runner.go:130] > LimitNPROC=infinity
	I1002 10:58:58.138698 2249882 command_runner.go:130] > LimitCORE=infinity
	I1002 10:58:58.138705 2249882 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1002 10:58:58.138712 2249882 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1002 10:58:58.138720 2249882 command_runner.go:130] > TasksMax=infinity
	I1002 10:58:58.138725 2249882 command_runner.go:130] > TimeoutStartSec=0
	I1002 10:58:58.138737 2249882 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1002 10:58:58.138748 2249882 command_runner.go:130] > Delegate=yes
	I1002 10:58:58.138759 2249882 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1002 10:58:58.138768 2249882 command_runner.go:130] > KillMode=process
	I1002 10:58:58.138772 2249882 command_runner.go:130] > [Install]
	I1002 10:58:58.138779 2249882 command_runner.go:130] > WantedBy=multi-user.target
	I1002 10:58:58.141317 2249882 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 10:58:58.141385 2249882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 10:58:58.159230 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:58:58.194592 2249882 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1002 10:58:58.196408 2249882 ssh_runner.go:195] Run: which cri-dockerd
	I1002 10:58:58.200640 2249882 command_runner.go:130] > /usr/bin/cri-dockerd
	I1002 10:58:58.201708 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 10:58:58.216309 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 10:58:58.240331 2249882 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 10:58:58.387285 2249882 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 10:58:58.501982 2249882 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 10:58:58.502075 2249882 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 10:58:58.528652 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:58.632108 2249882 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 10:58:58.954354 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:58:59.066172 2249882 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 10:58:59.170119 2249882 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 10:58:59.274821 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:59.384726 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 10:58:59.410124 2249882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:58:59.536868 2249882 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 10:58:59.639691 2249882 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 10:58:59.639810 2249882 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 10:58:59.644497 2249882 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1002 10:58:59.644522 2249882 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 10:58:59.644531 2249882 command_runner.go:130] > Device: 10003bh/1048635d	Inode: 279         Links: 1
	I1002 10:58:59.644554 2249882 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1002 10:58:59.644564 2249882 command_runner.go:130] > Access: 2023-10-02 10:58:59.553794138 +0000
	I1002 10:58:59.644590 2249882 command_runner.go:130] > Modify: 2023-10-02 10:58:59.549794160 +0000
	I1002 10:58:59.644604 2249882 command_runner.go:130] > Change: 2023-10-02 10:58:59.549794160 +0000
	I1002 10:58:59.644610 2249882 command_runner.go:130] >  Birth: -
	I1002 10:58:59.644895 2249882 start.go:537] Will wait 60s for crictl version
	I1002 10:58:59.644980 2249882 ssh_runner.go:195] Run: which crictl
	I1002 10:58:59.649583 2249882 command_runner.go:130] > /usr/bin/crictl
	I1002 10:58:59.650983 2249882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:58:59.709810 2249882 command_runner.go:130] > Version:  0.1.0
	I1002 10:58:59.709833 2249882 command_runner.go:130] > RuntimeName:  docker
	I1002 10:58:59.709840 2249882 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1002 10:58:59.709846 2249882 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 10:58:59.712494 2249882 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 10:58:59.712586 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:58:59.740158 2249882 command_runner.go:130] > 24.0.6
	I1002 10:58:59.741953 2249882 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 10:58:59.769637 2249882 command_runner.go:130] > 24.0.6
	I1002 10:58:59.775658 2249882 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 10:58:59.777376 2249882 out.go:177]   - env NO_PROXY=192.168.58.2
	I1002 10:58:59.779344 2249882 out.go:177]   - env NO_PROXY=192.168.58.2,192.168.58.3
	I1002 10:58:59.781180 2249882 cli_runner.go:164] Run: docker network inspect multinode-899833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 10:58:59.799195 2249882 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 10:58:59.803631 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:58:59.816424 2249882 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833 for IP: 192.168.58.4
	I1002 10:58:59.816458 2249882 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1d43a94e604cdd7d897bd7b1078cd14b38f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:58:59.816617 2249882 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key
	I1002 10:58:59.816663 2249882 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key
	I1002 10:58:59.816677 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 10:58:59.816693 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 10:58:59.816709 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 10:58:59.816720 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 10:58:59.816780 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem (1338 bytes)
	W1002 10:58:59.816813 2249882 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700_empty.pem, impossibly tiny 0 bytes
	I1002 10:58:59.816825 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 10:58:59.816850 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:58:59.816878 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:58:59.816904 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/certs/key.pem (1679 bytes)
	I1002 10:58:59.816954 2249882 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem (1708 bytes)
	I1002 10:58:59.816987 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem -> /usr/share/ca-certificates/2139700.pem
	I1002 10:58:59.817003 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem -> /usr/share/ca-certificates/21397002.pem
	I1002 10:58:59.817014 2249882 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:58:59.817382 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:58:59.847492 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 10:58:59.877398 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:58:59.906495 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 10:58:59.937639 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/certs/2139700.pem --> /usr/share/ca-certificates/2139700.pem (1338 bytes)
	I1002 10:58:59.966131 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/ssl/certs/21397002.pem --> /usr/share/ca-certificates/21397002.pem (1708 bytes)
	I1002 10:58:59.995972 2249882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:59:00.046315 2249882 ssh_runner.go:195] Run: openssl version
	I1002 10:59:00.056919 2249882 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 10:59:00.057417 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2139700.pem && ln -fs /usr/share/ca-certificates/2139700.pem /etc/ssl/certs/2139700.pem"
	I1002 10:59:00.076454 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2139700.pem
	I1002 10:59:00.093299 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:59:00.093344 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:41 /usr/share/ca-certificates/2139700.pem
	I1002 10:59:00.094118 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2139700.pem
	I1002 10:59:00.109135 2249882 command_runner.go:130] > 51391683
	I1002 10:59:00.110391 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2139700.pem /etc/ssl/certs/51391683.0"
	I1002 10:59:00.127665 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21397002.pem && ln -fs /usr/share/ca-certificates/21397002.pem /etc/ssl/certs/21397002.pem"
	I1002 10:59:00.146695 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21397002.pem
	I1002 10:59:00.152933 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:59:00.153345 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:41 /usr/share/ca-certificates/21397002.pem
	I1002 10:59:00.153418 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21397002.pem
	I1002 10:59:00.163484 2249882 command_runner.go:130] > 3ec20f2e
	I1002 10:59:00.164022 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21397002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 10:59:00.179346 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:59:00.193887 2249882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:59:00.199319 2249882 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:59:00.199545 2249882 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:59:00.199613 2249882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:59:00.209021 2249882 command_runner.go:130] > b5213941
	I1002 10:59:00.209803 2249882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 10:59:00.222149 2249882 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:59:00.227129 2249882 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:59:00.227430 2249882 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:59:00.227533 2249882 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 10:59:00.314770 2249882 command_runner.go:130] > cgroupfs
	I1002 10:59:00.316635 2249882 cni.go:84] Creating CNI manager for ""
	I1002 10:59:00.316655 2249882 cni.go:136] 3 nodes found, recommending kindnet
	I1002 10:59:00.316664 2249882 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:59:00.316683 2249882 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899833 NodeName:multinode-899833-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 10:59:00.316802 2249882 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899833-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:59:00.316857 2249882 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-899833-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:59:00.316925 2249882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 10:59:00.327072 2249882 command_runner.go:130] > kubeadm
	I1002 10:59:00.327144 2249882 command_runner.go:130] > kubectl
	I1002 10:59:00.327164 2249882 command_runner.go:130] > kubelet
	I1002 10:59:00.328304 2249882 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:59:00.328375 2249882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 10:59:00.340479 2249882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1002 10:59:00.363317 2249882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 10:59:00.385208 2249882 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 10:59:00.390069 2249882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:59:00.404029 2249882 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:59:00.404322 2249882 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:59:00.404375 2249882 start.go:304] JoinCluster: &{Name:multinode-899833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-899833 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:fals
e logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPause
Interval:1m0s}
	I1002 10:59:00.404529 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 10:59:00.404625 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:59:00.423690 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:59:00.625775 2249882 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d 
	I1002 10:59:00.625833 2249882 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:00.625876 2249882 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:59:00.626270 2249882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-899833-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1002 10:59:00.626336 2249882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:59:00.649061 2249882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35590 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:59:00.819876 2249882 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1002 10:59:00.888002 2249882 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-jbhdj, kube-system/kube-proxy-xnhqd
	I1002 10:59:03.912302 2249882 command_runner.go:130] > node/multinode-899833-m03 cordoned
	I1002 10:59:03.912328 2249882 command_runner.go:130] > pod "busybox-5bc68d56bd-zwsch" has DeletionTimestamp older than 1 seconds, skipping
	I1002 10:59:03.912336 2249882 command_runner.go:130] > node/multinode-899833-m03 drained
	I1002 10:59:03.912358 2249882 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-899833-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.286062073s)
	I1002 10:59:03.912374 2249882 node.go:108] successfully drained node "m03"
	I1002 10:59:03.912725 2249882 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:59:03.912988 2249882 kapi.go:59] client config for multinode-899833: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/multinode-899833/client.key", CAFile:"/home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169df20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:59:03.913360 2249882 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1002 10:59:03.913416 2249882 round_trippers.go:463] DELETE https://192.168.58.2:8443/api/v1/nodes/multinode-899833-m03
	I1002 10:59:03.913426 2249882 round_trippers.go:469] Request Headers:
	I1002 10:59:03.913436 2249882 round_trippers.go:473]     Accept: application/json, */*
	I1002 10:59:03.913443 2249882 round_trippers.go:473]     Content-Type: application/json
	I1002 10:59:03.913452 2249882 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 10:59:03.917697 2249882 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 10:59:03.917719 2249882 round_trippers.go:577] Response Headers:
	I1002 10:59:03.917727 2249882 round_trippers.go:580]     Content-Length: 171
	I1002 10:59:03.917733 2249882 round_trippers.go:580]     Date: Mon, 02 Oct 2023 10:59:03 GMT
	I1002 10:59:03.917740 2249882 round_trippers.go:580]     Audit-Id: 20caf58d-57b7-4fb5-a6db-64bbd3a7be34
	I1002 10:59:03.917746 2249882 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 10:59:03.917753 2249882 round_trippers.go:580]     Content-Type: application/json
	I1002 10:59:03.917766 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ddb4fd53-4a0c-4419-8e2b-38339ca3ea91
	I1002 10:59:03.917773 2249882 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 474a9fe6-6754-4c0d-99f1-7996a518a3f7
	I1002 10:59:03.917971 2249882 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-899833-m03","kind":"nodes","uid":"332112e7-39bc-44d1-86bd-88e1074e5d8d"}}
	I1002 10:59:03.918048 2249882 node.go:124] successfully deleted node "m03"
	I1002 10:59:03.918074 2249882 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:03.918130 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:03.918168 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 10:59:03.968197 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:59:04.024868 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:59:04.024891 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:59:04.024898 2249882 command_runner.go:130] > OS: Linux
	I1002 10:59:04.024904 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:59:04.024911 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:59:04.024918 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:59:04.024925 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:59:04.024937 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:59:04.024943 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:59:04.024953 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:59:04.024962 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:59:04.024969 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:59:04.183562 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:59:04.183589 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 10:59:04.209203 2249882 command_runner.go:130] ! W1002 10:59:03.967644    1719 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:59:04.209229 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:59:04.209247 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:59:04.209282 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:59:04.209292 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:59:04.209309 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 10:59:04.209319 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 10:59:04.209375 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:03.967644    1719 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:04.209394 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 10:59:04.209407 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 10:59:04.262481 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 10:59:04.262556 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:04.262611 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:04.262660 2249882 retry.go:31] will retry after 11.616103796s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:03.967644    1719 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:15.879205 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:15.879290 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 10:59:15.922558 2249882 command_runner.go:130] ! W1002 10:59:15.922068    2168 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:59:15.922657 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:59:15.981705 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:59:16.074566 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:59:16.074592 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:59:16.128542 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 10:59:16.128569 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:16.131598 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:59:16.131621 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:59:16.131629 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:59:16.131635 2249882 command_runner.go:130] > OS: Linux
	I1002 10:59:16.131642 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:59:16.131649 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:59:16.131656 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:59:16.131662 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:59:16.131669 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:59:16.131675 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:59:16.131682 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:59:16.131689 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:59:16.131695 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:59:16.131704 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:59:16.131713 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1002 10:59:16.131762 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:15.922068    2168 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:16.131779 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 10:59:16.131793 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 10:59:16.204334 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 10:59:16.204356 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:16.204411 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:16.204427 2249882 retry.go:31] will retry after 20.034972791s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:15.922068    2168 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.239580 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:36.239639 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 10:59:36.297751 2249882 command_runner.go:130] ! W1002 10:59:36.297387    2354 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:59:36.298313 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:59:36.353189 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:59:36.437536 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:59:36.437559 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:59:36.486158 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 10:59:36.486181 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.492399 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:59:36.492424 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:59:36.492432 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:59:36.492438 2249882 command_runner.go:130] > OS: Linux
	I1002 10:59:36.492449 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:59:36.492463 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:59:36.492472 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:59:36.492478 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:59:36.492495 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:59:36.492502 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:59:36.492523 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:59:36.492530 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:59:36.492541 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:59:36.492548 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:59:36.492557 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1002 10:59:36.492607 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:36.297387    2354 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.492623 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 10:59:36.492637 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 10:59:36.552712 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 10:59:36.552738 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.552760 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:36.552781 2249882 retry.go:31] will retry after 14.747204609s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:36.297387    2354 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:51.303178 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 10:59:51.303233 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 10:59:51.346714 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 10:59:51.417788 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 10:59:51.417815 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 10:59:51.417822 2249882 command_runner.go:130] > OS: Linux
	I1002 10:59:51.417828 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 10:59:51.417836 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 10:59:51.417842 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 10:59:51.417849 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 10:59:51.417855 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 10:59:51.417863 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 10:59:51.417872 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 10:59:51.417878 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 10:59:51.417884 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 10:59:51.539310 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 10:59:51.539334 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 10:59:51.563625 2249882 command_runner.go:130] ! W1002 10:59:51.346170    2489 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 10:59:51.563649 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 10:59:51.563666 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 10:59:51.563673 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 10:59:51.563682 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 10:59:51.563699 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 10:59:51.563712 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 10:59:51.563763 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:51.346170    2489 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:51.563779 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 10:59:51.563792 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 10:59:51.608541 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 10:59:51.608567 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:51.608595 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 10:59:51.608615 2249882 retry.go:31] will retry after 29.16686618s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 10:59:51.346170    2489 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:20.778818 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:00:20.778874 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 11:00:20.826840 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:00:20.886333 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 11:00:20.886358 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 11:00:20.886365 2249882 command_runner.go:130] > OS: Linux
	I1002 11:00:20.886372 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 11:00:20.886383 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 11:00:20.886390 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 11:00:20.886401 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 11:00:20.886408 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 11:00:20.886414 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 11:00:20.886423 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 11:00:20.886437 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 11:00:20.886444 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 11:00:20.997494 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:00:20.997516 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:00:21.025214 2249882 command_runner.go:130] ! W1002 11:00:20.826254    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 11:00:21.025282 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 11:00:21.025300 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 11:00:21.025314 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 11:00:21.025324 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 11:00:21.025342 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 11:00:21.025357 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 11:00:21.025414 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:00:20.826254    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:21.025430 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 11:00:21.025444 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 11:00:21.074836 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 11:00:21.074862 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:21.074886 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:21.074902 2249882 retry.go:31] will retry after 33.544601599s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:00:20.826254    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:54.621357 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:00:54.621429 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 11:00:54.672956 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:00:54.735166 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 11:00:54.735200 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 11:00:54.735241 2249882 command_runner.go:130] > OS: Linux
	I1002 11:00:54.735249 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 11:00:54.735256 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 11:00:54.735263 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 11:00:54.735270 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 11:00:54.735276 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 11:00:54.735282 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 11:00:54.735289 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 11:00:54.735295 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 11:00:54.735302 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 11:00:54.857318 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:00:54.857342 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:00:54.885967 2249882 command_runner.go:130] ! W1002 11:00:54.672262    3023 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 11:00:54.885991 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 11:00:54.886008 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 11:00:54.886018 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 11:00:54.886027 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 11:00:54.886042 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 11:00:54.886054 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 11:00:54.886097 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:00:54.672262    3023 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:54.886110 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 11:00:54.886122 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 11:00:54.937392 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 11:00:54.937417 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:54.937440 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:00:54.937457 2249882 retry.go:31] will retry after 35.215075844s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:00:54.672262    3023 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:01:30.153729 2249882 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:01:30.153833 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03"
	I1002 11:01:30.203411 2249882 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:01:30.260517 2249882 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 11:01:30.260545 2249882 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 11:01:30.260553 2249882 command_runner.go:130] > OS: Linux
	I1002 11:01:30.260561 2249882 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 11:01:30.260570 2249882 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 11:01:30.260577 2249882 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 11:01:30.260583 2249882 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 11:01:30.260592 2249882 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 11:01:30.260608 2249882 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 11:01:30.260620 2249882 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 11:01:30.260629 2249882 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 11:01:30.260637 2249882 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 11:01:30.376995 2249882 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:01:30.377035 2249882 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:01:30.405361 2249882 command_runner.go:130] ! W1002 11:01:30.202946    3312 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 11:01:30.405392 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 11:01:30.405411 2249882 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 11:01:30.405421 2249882 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 11:01:30.405431 2249882 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 11:01:30.405450 2249882 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1002 11:01:30.405462 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1002 11:01:30.405513 2249882 start.go:327] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:01:30.202946    3312 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:01:30.405526 2249882 start.go:330] resetting worker node "m03" before attempting to rejoin cluster...
	I1002 11:01:30.405540 2249882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force"
	I1002 11:01:30.447767 2249882 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1002 11:01:30.447795 2249882 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1002 11:01:30.451281 2249882 start.go:332] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1002 11:01:30.451330 2249882 start.go:306] JoinCluster complete in 2m30.046955297s
	I1002 11:01:30.454789 2249882 out.go:177] 
	W1002 11:01:30.456792 2249882 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7b1in4.1xprxhp9y8081jx6 --discovery-token-ca-cert-hash sha256:224fd2821bcae6cac454d937e803319543cceeb9da69e20ca575f0a6d7be306d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899833-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1045-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1002 11:01:30.202946    3312 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-899833-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 11:01:30.456855 2249882 out.go:239] * 
	W1002 11:01:30.457800 2249882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 11:01:30.459957 2249882 out.go:177] 
	
	* 
	* ==> Docker <==
	* Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Start docker client with request timeout 0s"
	Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Loaded network plugin cni"
	Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Docker Info: &{ID:e57c56d4-4e04-4213-b326-d9b6008115c0 Containers:20 ContainersRunning:0 ContainersPaused:0 ContainersStopped:20 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02T10:57:34.282805556Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 22.04.3 LTS (containerized) OSVersion:22.04 OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0x4000474bd0 NCPU:2 MemTotal:8215040000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:multinode-899833 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 02 10:57:34 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:34Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 02 10:57:34 multinode-899833 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 02 10:57:48 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-s5pf5_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7f68c6c1b9a974b8a0c30ef12ee8f120c3dfa14c28d6feb70ea36fda6ae1ebf9\""
	Oct 02 10:57:48 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:48Z" level=error msg="Failed to retrieve checkpoint for sandbox 659c426001740c18a13ded874cbe949dfa246a67ca53dd90678f88b4caafa057: checkpoint is not found"
	Oct 02 10:57:48 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5bc68d56bd-n7gl6_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"491b6e98f47b25afbbb1c380f17e51f1b7efd1d9b9eaa3ee84f973ca4a6e8850\""
	Oct 02 10:57:48 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/479a610a02710f211dabe7245c73da3560afe19e917979f6c2b847c402f91aa5/resolv.conf as [nameserver 192.168.58.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 10:57:48 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c2dac5975e7a976feeb76de45ae9ea45762ddafb9fd5d95a9e4ff5d6a399fbfa/resolv.conf as [nameserver 192.168.58.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 10:57:48 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6a8b9b1a1209d794f2e6595dab443b73105280d72dca108c711ecbf3494d7596/resolv.conf as [nameserver 192.168.58.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 10:57:48 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/33646513f5e51b13a53b2b11275381f58d1827c402299ec99db8cd19197dd852/resolv.conf as [nameserver 192.168.58.1 search us-east-2.compute.internal options ndots:0 edns0 trust-ad]"
	Oct 02 10:57:49 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-s5pf5_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7f68c6c1b9a974b8a0c30ef12ee8f120c3dfa14c28d6feb70ea36fda6ae1ebf9\""
	Oct 02 10:57:53 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 02 10:57:54 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62782a095337862f2d49787ff144ac99559fa617623231c6555e51404973ac89/resolv.conf as [nameserver 192.168.58.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 10:57:54 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b84a049cae7c692f9348918379ba1cbf4654c80fb9472d8f5bc90fc692ddbd83/resolv.conf as [nameserver 192.168.58.1 search us-east-2.compute.internal options ndots:0 edns0 trust-ad]"
	Oct 02 10:57:54 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5d358fb9f4c99540b6c65e8a508fb547486aed1b4280f859262b8d76af869f27/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 10:57:54 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b8ec693dfd8ccad3cb879b64d76946a43643d9cc08213b5ada2c53272d23411/resolv.conf as [nameserver 192.168.58.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 10:57:54 multinode-899833 cri-dockerd[1104]: time="2023-10-02T10:57:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7c65eb62ee2451e7e5d7983c74bc08a8dba51e9c202a63b2387c3163e3833835/resolv.conf as [nameserver 192.168.58.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 10:58:25 multinode-899833 dockerd[876]: time="2023-10-02T10:58:25.056538711Z" level=info msg="ignoring event" container=882271c4c708f20e5c89dff972d9444d2a01f38fe752a8d8f75caa16a021f92b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	283ddd9fc6914       ba04bb24b9575                                                                                         2 minutes ago       Running             storage-provisioner       2                   b84a049cae7c6       storage-provisioner
	70885cc6b8ff9       97e04611ad434                                                                                         3 minutes ago       Running             coredns                   2                   7b8ec693dfd8c       coredns-5dd5756b68-s5pf5
	ad76da9a52393       04b4eaa3d3db8                                                                                         3 minutes ago       Running             kindnet-cni               1                   7c65eb62ee245       kindnet-kp6fb
	d5f84bfed2c94       89a35e2ebb6b9                                                                                         3 minutes ago       Running             busybox                   1                   5d358fb9f4c99       busybox-5bc68d56bd-n7gl6
	882271c4c708f       ba04bb24b9575                                                                                         3 minutes ago       Exited              storage-provisioner       1                   b84a049cae7c6       storage-provisioner
	f5440d8f11db8       7da62c127fc0f                                                                                         3 minutes ago       Running             kube-proxy                1                   62782a0953378       kube-proxy-fjcp8
	318bddb38652c       64fc40cee3716                                                                                         3 minutes ago       Running             kube-scheduler            1                   33646513f5e51       kube-scheduler-multinode-899833
	7976a4a982baa       9cdd6470f48c8                                                                                         3 minutes ago       Running             etcd                      1                   6a8b9b1a1209d       etcd-multinode-899833
	dd9314a96fe37       30bb499447fe1                                                                                         3 minutes ago       Running             kube-apiserver            1                   c2dac5975e7a9       kube-apiserver-multinode-899833
	4c69369fa41ba       89d57b83c1786                                                                                         3 minutes ago       Running             kube-controller-manager   1                   479a610a02710       kube-controller-manager-multinode-899833
	1238dd3cb8a99       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago       Exited              busybox                   0                   491b6e98f47b2       busybox-5bc68d56bd-n7gl6
	f0ac914e78fcf       97e04611ad434                                                                                         6 minutes ago       Exited              coredns                   1                   7f68c6c1b9a97       coredns-5dd5756b68-s5pf5
	65189e7d31edb       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              6 minutes ago       Exited              kindnet-cni               0                   584b6ab2c0e01       kindnet-kp6fb
	7264383872ff2       7da62c127fc0f                                                                                         6 minutes ago       Exited              kube-proxy                0                   d027a8a33607b       kube-proxy-fjcp8
	a82e598287961       64fc40cee3716                                                                                         6 minutes ago       Exited              kube-scheduler            0                   832b4901b7229       kube-scheduler-multinode-899833
	1bdae6fab8f9d       30bb499447fe1                                                                                         6 minutes ago       Exited              kube-apiserver            0                   68f88034ce87a       kube-apiserver-multinode-899833
	0beca8ac2d3bd       9cdd6470f48c8                                                                                         6 minutes ago       Exited              etcd                      0                   0db8e2ef374cb       etcd-multinode-899833
	c595b0a59f0ec       89d57b83c1786                                                                                         6 minutes ago       Exited              kube-controller-manager   0                   09f490c928ae6       kube-controller-manager-multinode-899833
	
	* 
	* ==> coredns [70885cc6b8ff] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39458 - 44364 "HINFO IN 7297839892083527008.8510762188677612944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022694335s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [f0ac914e78fc] <==
	* [INFO] 10.244.1.2:36241 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001138209s
	[INFO] 10.244.1.2:60814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086835s
	[INFO] 10.244.1.2:43822 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083183s
	[INFO] 10.244.1.2:43599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001074226s
	[INFO] 10.244.1.2:41637 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081451s
	[INFO] 10.244.1.2:48541 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074986s
	[INFO] 10.244.1.2:34250 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077555s
	[INFO] 10.244.0.3:53086 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004942s
	[INFO] 10.244.0.3:36435 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084685s
	[INFO] 10.244.0.3:38545 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069801s
	[INFO] 10.244.0.3:56007 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072738s
	[INFO] 10.244.1.2:36290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109989s
	[INFO] 10.244.1.2:36732 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100078s
	[INFO] 10.244.1.2:55627 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105714s
	[INFO] 10.244.1.2:55863 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063917s
	[INFO] 10.244.0.3:59715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080197s
	[INFO] 10.244.0.3:47997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113493s
	[INFO] 10.244.0.3:48162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008905s
	[INFO] 10.244.0.3:34704 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071901s
	[INFO] 10.244.1.2:58129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085242s
	[INFO] 10.244.1.2:52505 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000060455s
	[INFO] 10.244.1.2:38963 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000050043s
	[INFO] 10.244.1.2:50247 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065189s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-899833
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-899833
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=multinode-899833
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T10_54_44_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:54:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899833
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:01:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:57:53 +0000   Mon, 02 Oct 2023 10:54:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:57:53 +0000   Mon, 02 Oct 2023 10:54:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:57:53 +0000   Mon, 02 Oct 2023 10:54:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:57:53 +0000   Mon, 02 Oct 2023 10:54:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-899833
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebae3de867834c549e9b4ee0c29e70f1
	  System UUID:                fc19645d-577c-465b-8488-e2ba3ba2b6bc
	  Boot ID:                    8f181a8e-95ee-4bd9-9704-e77c1ff4607b
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-n7gl6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 coredns-5dd5756b68-s5pf5                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m37s
	  kube-system                 etcd-multinode-899833                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m50s
	  kube-system                 kindnet-kp6fb                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m37s
	  kube-system                 kube-apiserver-multinode-899833             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-controller-manager-multinode-899833    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-proxy-fjcp8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-scheduler-multinode-899833             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m36s                  kube-proxy       
	  Normal  Starting                 3m38s                  kube-proxy       
	  Normal  NodeReady                6m50s                  kubelet          Node multinode-899833 status is now: NodeReady
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    6m50s                  kubelet          Node multinode-899833 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m50s                  kubelet          Node multinode-899833 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             6m50s                  kubelet          Node multinode-899833 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m50s                  kubelet          Node multinode-899833 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           6m38s                  node-controller  Node multinode-899833 event: Registered Node multinode-899833 in Controller
	  Normal  Starting                 3m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m45s (x8 over 3m45s)  kubelet          Node multinode-899833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x8 over 3m45s)  kubelet          Node multinode-899833 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x7 over 3m45s)  kubelet          Node multinode-899833 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m28s                  node-controller  Node multinode-899833 event: Registered Node multinode-899833 in Controller
	
	
	Name:               multinode-899833-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-899833-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:58:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899833-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:01:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:58:46 +0000   Mon, 02 Oct 2023 10:58:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:58:46 +0000   Mon, 02 Oct 2023 10:58:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:58:46 +0000   Mon, 02 Oct 2023 10:58:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:58:46 +0000   Mon, 02 Oct 2023 10:58:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-899833-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 cff9c11e3e774de292d51823cc77d7ff
	  System UUID:                d1e04b8a-2747-45d5-98b5-18d47f61564e
	  Boot ID:                    8f181a8e-95ee-4bd9-9704-e77c1ff4607b
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-f9ffb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kindnet-lmfm5               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-proxy-76wth            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  Starting                 6m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m3s (x5 over 6m5s)    kubelet          Node multinode-899833-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x5 over 6m5s)    kubelet          Node multinode-899833-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x5 over 6m5s)    kubelet          Node multinode-899833-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m3s                   kubelet          Node multinode-899833-m02 status is now: NodeReady
	  Normal  Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node multinode-899833-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node multinode-899833-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node multinode-899833-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m47s                  kubelet          Node multinode-899833-m02 status is now: NodeReady
	  Normal  RegisteredNode           2m43s                  node-controller  Node multinode-899833-m02 event: Registered Node multinode-899833-m02 in Controller
	
	
	Name:               multinode-899833-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-899833-m03
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:59:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899833-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:01:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:59:14 +0000   Mon, 02 Oct 2023 10:59:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:59:14 +0000   Mon, 02 Oct 2023 10:59:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:59:14 +0000   Mon, 02 Oct 2023 10:59:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:59:14 +0000   Mon, 02 Oct 2023 10:59:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-899833-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 50ea88b426d4403d9471f5bc000b8459
	  System UUID:                7972e5d0-c5f6-476f-bccd-8e4400f3a092
	  Boot ID:                    8f181a8e-95ee-4bd9-9704-e77c1ff4607b
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jbhdj       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m1s
	  kube-system                 kube-proxy-xnhqd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m59s                  kube-proxy  
	  Normal  Starting                 2m20s                  kube-proxy  
	  Normal  Starting                 4m32s                  kube-proxy  
	  Normal  Starting                 5m1s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m1s (x2 over 5m1s)    kubelet     Node multinode-899833-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s (x2 over 5m1s)    kubelet     Node multinode-899833-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s (x2 over 5m1s)    kubelet     Node multinode-899833-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m1s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m1s                   kubelet     Node multinode-899833-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     4m34s (x2 over 4m34s)  kubelet     Node multinode-899833-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m34s (x2 over 4m34s)  kubelet     Node multinode-899833-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m34s                  kubelet     Node multinode-899833-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  4m34s (x2 over 4m34s)  kubelet     Node multinode-899833-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m34s                  kubelet     Starting kubelet.
	  Normal  Starting                 2m42s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m36s (x7 over 2m42s)  kubelet     Node multinode-899833-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m42s)  kubelet     Node multinode-899833-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m42s)  kubelet     Node multinode-899833-m03 status is now: NodeHasNoDiskPressure
	
	* 
	* ==> dmesg <==
	* [  +0.001064] FS-Cache: O-key=[8] '8f6b3b0000000000'
	[  +0.000705] FS-Cache: N-cookie c=000000ae [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000650f42e0
	[  +0.001051] FS-Cache: N-key=[8] '8f6b3b0000000000'
	[  +0.002955] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=000000a8 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000967] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=00000000590f981e
	[  +0.001043] FS-Cache: O-key=[8] '8f6b3b0000000000'
	[  +0.000711] FS-Cache: N-cookie c=000000af [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000119233c1
	[  +0.001043] FS-Cache: N-key=[8] '8f6b3b0000000000'
	[Oct 2 10:45] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=000000a6 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=00000000df3c7dbb
	[  +0.001149] FS-Cache: O-key=[8] '8e6b3b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=000000b1 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000650f42e0
	[  +0.001057] FS-Cache: N-key=[8] '8e6b3b0000000000'
	[  +0.286574] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=000000ab [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000954] FS-Cache: O-cookie d=00000000a92d3341{9p.inode} n=000000000fcfab79
	[  +0.001077] FS-Cache: O-key=[8] '946b3b0000000000'
	[  +0.000748] FS-Cache: N-cookie c=000000b2 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000a92d3341{9p.inode} n=00000000ff2999f7
	[  +0.001033] FS-Cache: N-key=[8] '946b3b0000000000'
	
	* 
	* ==> etcd [0beca8ac2d3b] <==
	* {"level":"info","ts":"2023-10-02T10:54:37.837749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T10:54:37.837868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-02T10:54:37.837967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-02T10:54:37.838077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-02T10:54:37.841369Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:54:37.845473Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-899833 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T10:54:37.845649Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:54:37.846734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T10:54:37.846933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:54:37.847919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-02T10:54:37.848246Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:54:37.848465Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:54:37.848604Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:54:37.849211Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T10:54:37.849345Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T10:57:04.203899Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-02T10:57:04.203974Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-899833","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	{"level":"warn","ts":"2023-10-02T10:57:04.20406Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.58.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T10:57:04.204082Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.58.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T10:57:04.204145Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T10:57:04.204227Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-02T10:57:04.25479Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2023-10-02T10:57:04.256754Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-02T10:57:04.256882Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-02T10:57:04.256891Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-899833","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [7976a4a982ba] <==
	* {"level":"info","ts":"2023-10-02T10:57:49.511743Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:57:49.511819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:57:49.512819Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-10-02T10:57:49.525446Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T10:57:49.525554Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T10:57:49.525583Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T10:57:49.528687Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T10:57:49.528932Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T10:57:49.533277Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T10:57:49.533444Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-02T10:57:49.533486Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-02T10:57:49.754708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T10:57:49.754965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T10:57:49.755104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-02T10:57:49.755378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T10:57:49.75544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-10-02T10:57:49.755514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2023-10-02T10:57:49.755571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-10-02T10:57:49.757446Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-899833 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T10:57:49.757676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:57:49.757923Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:57:49.765512Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-02T10:57:49.766231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T10:57:49.766303Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T10:57:49.769275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:01:33 up 18:44,  0 users,  load average: 0.64, 1.50, 2.05
	Linux multinode-899833 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [65189e7d31ed] <==
	* I1002 10:56:20.346579       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 10:56:30.350864       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 10:56:30.350894       1 main.go:227] handling current node
	I1002 10:56:30.350905       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 10:56:30.350912       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 10:56:40.364102       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 10:56:40.364132       1 main.go:227] handling current node
	I1002 10:56:40.364149       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 10:56:40.364155       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 10:56:40.364247       1 main.go:223] Handling node with IPs: map[192.168.58.4:{}]
	I1002 10:56:40.364259       1 main.go:250] Node multinode-899833-m03 has CIDR [10.244.2.0/24] 
	I1002 10:56:40.364292       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.58.4 Flags: [] Table: 0} 
	I1002 10:56:50.368875       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 10:56:50.368914       1 main.go:227] handling current node
	I1002 10:56:50.368927       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 10:56:50.368942       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 10:56:50.369032       1 main.go:223] Handling node with IPs: map[192.168.58.4:{}]
	I1002 10:56:50.369047       1 main.go:250] Node multinode-899833-m03 has CIDR [10.244.2.0/24] 
	I1002 10:57:00.374711       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 10:57:00.374748       1 main.go:227] handling current node
	I1002 10:57:00.374761       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 10:57:00.374767       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 10:57:00.375131       1 main.go:223] Handling node with IPs: map[192.168.58.4:{}]
	I1002 10:57:00.375150       1 main.go:250] Node multinode-899833-m03 has CIDR [10.244.3.0/24] 
	I1002 10:57:00.375241       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.58.4 Flags: [] Table: 0} 
	
	* 
	* ==> kindnet [ad76da9a5239] <==
	* I1002 11:00:45.924085       1 main.go:250] Node multinode-899833-m03 has CIDR [10.244.2.0/24] 
	I1002 11:00:55.929797       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 11:00:55.929827       1 main.go:227] handling current node
	I1002 11:00:55.929837       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 11:00:55.929843       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 11:00:55.933411       1 main.go:223] Handling node with IPs: map[192.168.58.4:{}]
	I1002 11:00:55.933444       1 main.go:250] Node multinode-899833-m03 has CIDR [10.244.2.0/24] 
	I1002 11:01:05.937772       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 11:01:05.937803       1 main.go:227] handling current node
	I1002 11:01:05.937814       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 11:01:05.937820       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 11:01:05.938181       1 main.go:223] Handling node with IPs: map[192.168.58.4:{}]
	I1002 11:01:05.938202       1 main.go:250] Node multinode-899833-m03 has CIDR [10.244.2.0/24] 
	I1002 11:01:15.951825       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 11:01:15.951854       1 main.go:227] handling current node
	I1002 11:01:15.951864       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 11:01:15.951870       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 11:01:15.952139       1 main.go:223] Handling node with IPs: map[192.168.58.4:{}]
	I1002 11:01:15.952157       1 main.go:250] Node multinode-899833-m03 has CIDR [10.244.2.0/24] 
	I1002 11:01:25.965082       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 11:01:25.965117       1 main.go:227] handling current node
	I1002 11:01:25.965128       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 11:01:25.965134       1 main.go:250] Node multinode-899833-m02 has CIDR [10.244.1.0/24] 
	I1002 11:01:25.965582       1 main.go:223] Handling node with IPs: map[192.168.58.4:{}]
	I1002 11:01:25.965611       1 main.go:250] Node multinode-899833-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kube-apiserver [1bdae6fab8f9] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 10:57:14.215151       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 10:57:14.227991       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 10:57:14.238804       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [dd9314a96fe3] <==
	* I1002 10:57:53.253239       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1002 10:57:53.253880       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 10:57:53.256189       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 10:57:53.301065       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 10:57:53.301524       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 10:57:53.352794       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 10:57:53.353148       1 aggregator.go:166] initial CRD sync complete...
	I1002 10:57:53.353301       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 10:57:53.353425       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 10:57:53.353524       1 cache.go:39] Caches are synced for autoregister controller
	I1002 10:57:53.362024       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 10:57:53.362612       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 10:57:53.362753       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 10:57:53.363280       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 10:57:53.368445       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 10:57:53.369862       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1002 10:57:53.391035       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 10:57:54.162652       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 10:57:55.914070       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 10:57:56.049617       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 10:57:56.064713       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 10:57:56.176780       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 10:57:56.186346       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 10:58:05.936262       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 10:58:06.041925       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [4c69369fa41b] <==
	* I1002 10:58:46.152326       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-899833-m02\" does not exist"
	I1002 10:58:46.177876       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-899833-m02" podCIDRs=["10.244.1.0/24"]
	I1002 10:58:46.427078       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899833-m02"
	I1002 10:58:47.053053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="99.979µs"
	I1002 10:58:47.152080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="127.17µs"
	I1002 10:58:47.236114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="42.231µs"
	I1002 10:58:47.239081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.886µs"
	I1002 10:58:50.940264       1 event.go:307] "Event occurred" object="multinode-899833-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-899833-m02 event: Registered Node multinode-899833-m02 in Controller"
	I1002 10:59:00.905720       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-f9ffb"
	I1002 10:59:00.919170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.996827ms"
	I1002 10:59:00.919313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="105.362µs"
	I1002 10:59:00.929436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.716296ms"
	I1002 10:59:00.929767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.983µs"
	I1002 10:59:00.935202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.896µs"
	I1002 10:59:02.479641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.200552ms"
	I1002 10:59:02.479900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="148.356µs"
	I1002 10:59:03.919018       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899833-m02"
	I1002 10:59:04.087911       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-899833-m03\" does not exist"
	I1002 10:59:04.089696       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-zwsch" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-zwsch"
	I1002 10:59:04.094957       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899833-m02"
	I1002 10:59:04.123666       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-899833-m03" podCIDRs=["10.244.2.0/24"]
	I1002 10:59:12.335213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.125µs"
	I1002 10:59:12.884459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.063µs"
	I1002 10:59:12.891531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="104.5µs"
	I1002 10:59:12.894009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.52µs"
	
	* 
	* ==> kube-controller-manager [c595b0a59f0e] <==
	* I1002 10:55:34.163804       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1002 10:55:34.183918       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wzmtg"
	I1002 10:55:34.193714       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-n7gl6"
	I1002 10:55:34.232792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.683637ms"
	I1002 10:55:34.266293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.172812ms"
	I1002 10:55:34.266600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.532µs"
	I1002 10:55:34.266866       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.565µs"
	I1002 10:55:36.799407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.613525ms"
	I1002 10:55:36.799509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.019µs"
	I1002 10:55:37.683325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.527407ms"
	I1002 10:55:37.683391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.062µs"
	I1002 10:56:07.978013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="83.798µs"
	I1002 10:56:32.230473       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899833-m02"
	I1002 10:56:32.231387       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-899833-m03\" does not exist"
	I1002 10:56:32.252076       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-899833-m03" podCIDRs=["10.244.2.0/24"]
	I1002 10:56:32.259999       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jbhdj"
	I1002 10:56:32.264374       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xnhqd"
	I1002 10:56:32.332327       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899833-m02"
	I1002 10:56:35.419759       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-899833-m03"
	I1002 10:56:35.420033       1 event.go:307] "Event occurred" object="multinode-899833-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-899833-m03 event: Registered Node multinode-899833-m03 in Controller"
	I1002 10:56:59.201899       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899833-m02"
	I1002 10:56:59.914043       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-899833-m03\" does not exist"
	I1002 10:56:59.914290       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899833-m02"
	I1002 10:56:59.922918       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-899833-m03" podCIDRs=["10.244.3.0/24"]
	I1002 10:57:00.011027       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899833-m02"
	
	* 
	* ==> kube-proxy [7264383872ff] <==
	* I1002 10:54:57.241428       1 server_others.go:69] "Using iptables proxy"
	I1002 10:54:57.263524       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1002 10:54:57.320685       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 10:54:57.324030       1 server_others.go:152] "Using iptables Proxier"
	I1002 10:54:57.324064       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 10:54:57.324072       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 10:54:57.324121       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 10:54:57.324356       1 server.go:846] "Version info" version="v1.28.2"
	I1002 10:54:57.324366       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:54:57.327340       1 config.go:188] "Starting service config controller"
	I1002 10:54:57.329975       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 10:54:57.330019       1 config.go:97] "Starting endpoint slice config controller"
	I1002 10:54:57.330025       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 10:54:57.332026       1 config.go:315] "Starting node config controller"
	I1002 10:54:57.332037       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 10:54:57.435068       1 shared_informer.go:318] Caches are synced for node config
	I1002 10:54:57.435099       1 shared_informer.go:318] Caches are synced for service config
	I1002 10:54:57.435164       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [f5440d8f11db] <==
	* I1002 10:57:55.183326       1 server_others.go:69] "Using iptables proxy"
	I1002 10:57:55.300400       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1002 10:57:55.377346       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 10:57:55.379700       1 server_others.go:152] "Using iptables Proxier"
	I1002 10:57:55.379862       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 10:57:55.379956       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 10:57:55.380087       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 10:57:55.380440       1 server.go:846] "Version info" version="v1.28.2"
	I1002 10:57:55.380757       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:57:55.381652       1 config.go:188] "Starting service config controller"
	I1002 10:57:55.381786       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 10:57:55.381920       1 config.go:97] "Starting endpoint slice config controller"
	I1002 10:57:55.381992       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 10:57:55.382554       1 config.go:315] "Starting node config controller"
	I1002 10:57:55.384097       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 10:57:55.482993       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 10:57:55.483199       1 shared_informer.go:318] Caches are synced for service config
	I1002 10:57:55.486274       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [318bddb38652] <==
	* I1002 10:57:52.525238       1 serving.go:348] Generated self-signed cert in-memory
	I1002 10:57:53.429990       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 10:57:53.430078       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:57:53.434224       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 10:57:53.434268       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 10:57:53.434424       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 10:57:53.434442       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:57:53.434529       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 10:57:53.434544       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 10:57:53.435120       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 10:57:53.435296       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 10:57:53.535186       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 10:57:53.535299       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:57:53.535193       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kube-scheduler [a82e59828796] <==
	* W1002 10:54:40.586424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 10:54:40.587903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 10:54:40.586526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 10:54:40.588054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 10:54:40.588342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 10:54:40.589067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 10:54:40.587027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 10:54:40.589760       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 10:54:40.589925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 10:54:40.590094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 10:54:40.590183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 10:54:41.480428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:54:41.480471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 10:54:41.546173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 10:54:41.546402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 10:54:41.557405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 10:54:41.557615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 10:54:41.638769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 10:54:41.638996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 10:54:41.643733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 10:54:41.643938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 10:54:41.843550       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 10:54:41.843592       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1002 10:54:43.578329       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1002 10:57:04.190014       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.008492    1491 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03-cni-cfg\") pod \"kindnet-kp6fb\" (UID: \"260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03\") " pod="kube-system/kindnet-kp6fb"
	Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.008614    1491 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/97d5bb7f-502d-4838-a926-c613783c1588-tmp\") pod \"storage-provisioner\" (UID: \"97d5bb7f-502d-4838-a926-c613783c1588\") " pod="kube-system/storage-provisioner"
	Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.008771    1491 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d159cb7-69ca-4b3c-b918-b698bb157220-lib-modules\") pod \"kube-proxy-fjcp8\" (UID: \"2d159cb7-69ca-4b3c-b918-b698bb157220\") " pod="kube-system/kube-proxy-fjcp8"
	Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.008888    1491 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03-xtables-lock\") pod \"kindnet-kp6fb\" (UID: \"260d72b2-ef9d-48eb-9b6c-b9b8bfebfb03\") " pod="kube-system/kindnet-kp6fb"
	Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.008983    1491 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d159cb7-69ca-4b3c-b918-b698bb157220-xtables-lock\") pod \"kube-proxy-fjcp8\" (UID: \"2d159cb7-69ca-4b3c-b918-b698bb157220\") " pod="kube-system/kube-proxy-fjcp8"
	Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.662977    1491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b84a049cae7c692f9348918379ba1cbf4654c80fb9472d8f5bc90fc692ddbd83"
	Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.785440    1491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d358fb9f4c99540b6c65e8a508fb547486aed1b4280f859262b8d76af869f27"
	Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.846612    1491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b8ec693dfd8ccad3cb879b64d76946a43643d9cc08213b5ada2c53272d23411"
	Oct 02 10:57:54 multinode-899833 kubelet[1491]: I1002 10:57:54.891459    1491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c65eb62ee2451e7e5d7983c74bc08a8dba51e9c202a63b2387c3163e3833835"
	Oct 02 10:57:55 multinode-899833 kubelet[1491]: I1002 10:57:55.082435    1491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62782a095337862f2d49787ff144ac99559fa617623231c6555e51404973ac89"
	Oct 02 10:57:57 multinode-899833 kubelet[1491]: I1002 10:57:57.170968    1491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 10:57:59 multinode-899833 kubelet[1491]: E1002 10:57:59.312481    1491 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Oct 02 10:57:59 multinode-899833 kubelet[1491]: E1002 10:57:59.313494    1491 helpers.go:677] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Oct 02 10:58:10 multinode-899833 kubelet[1491]: E1002 10:58:10.453421    1491 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Oct 02 10:58:10 multinode-899833 kubelet[1491]: E1002 10:58:10.453488    1491 helpers.go:677] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Oct 02 10:58:21 multinode-899833 kubelet[1491]: E1002 10:58:21.578961    1491 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Oct 02 10:58:21 multinode-899833 kubelet[1491]: E1002 10:58:21.579022    1491 helpers.go:677] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Oct 02 10:58:25 multinode-899833 kubelet[1491]: I1002 10:58:25.434475    1491 scope.go:117] "RemoveContainer" containerID="71790b749215b1665857b2b31e1f8ebb7b3929fcdb21b59235a055e77daf99f8"
	Oct 02 10:58:25 multinode-899833 kubelet[1491]: I1002 10:58:25.435060    1491 scope.go:117] "RemoveContainer" containerID="882271c4c708f20e5c89dff972d9444d2a01f38fe752a8d8f75caa16a021f92b"
	Oct 02 10:58:25 multinode-899833 kubelet[1491]: E1002 10:58:25.435377    1491 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(97d5bb7f-502d-4838-a926-c613783c1588)\"" pod="kube-system/storage-provisioner" podUID="97d5bb7f-502d-4838-a926-c613783c1588"
	Oct 02 10:58:32 multinode-899833 kubelet[1491]: E1002 10:58:32.672967    1491 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Oct 02 10:58:32 multinode-899833 kubelet[1491]: E1002 10:58:32.673022    1491 helpers.go:677] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Oct 02 10:58:36 multinode-899833 kubelet[1491]: I1002 10:58:36.041324    1491 scope.go:117] "RemoveContainer" containerID="882271c4c708f20e5c89dff972d9444d2a01f38fe752a8d8f75caa16a021f92b"
	Oct 02 10:58:43 multinode-899833 kubelet[1491]: E1002 10:58:43.827267    1491 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Oct 02 10:58:43 multinode-899833 kubelet[1491]: E1002 10:58:43.827321    1491 helpers.go:677] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-899833 -n multinode-899833
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-899833 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (272.03s)
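A failed subtest like `TestMultiNode/serial/RestartKeepsNodes` can usually be re-run in isolation with `go test`'s `-run` flag, which treats each slash-separated segment of the subtest path as its own regex. A minimal sketch (the `run_regex` helper is hypothetical, and the `./test/integration` path assumes the upstream minikube repository layout):

```shell
# Hypothetical helper: turn a subtest path into an anchored -run pattern.
# `go test -run` matches each slash-separated segment as a separate regex,
# so we anchor every segment individually to avoid matching sibling tests.
run_regex() {
  echo "^$(echo "$1" | sed 's|/|$/^|g')\$"
}

run_regex "TestMultiNode/serial/RestartKeepsNodes"
# prints: ^TestMultiNode$/^serial$/^RestartKeepsNodes$
#
# Then, from a minikube checkout (assumed layout), something like:
#   go test ./test/integration -v -timeout 30m \
#     -run "$(run_regex TestMultiNode/serial/RestartKeepsNodes)"
```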


Test pass (293/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.18
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.2/json-events 6.99
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.6
20 TestOffline 102.5
22 TestAddons/Setup 146.18
24 TestAddons/parallel/Registry 15.3
26 TestAddons/parallel/InspektorGadget 10.85
27 TestAddons/parallel/MetricsServer 5.8
30 TestAddons/parallel/CSI 62.89
31 TestAddons/parallel/Headlamp 12.46
32 TestAddons/parallel/CloudSpanner 5.71
33 TestAddons/parallel/LocalPath 53.64
36 TestAddons/serial/GCPAuth/Namespaces 0.17
37 TestAddons/StoppedEnableDisable 11.32
38 TestCertOptions 37.89
39 TestCertExpiration 254.1
40 TestDockerFlags 43.15
41 TestForceSystemdFlag 38.88
42 TestForceSystemdEnv 42.05
48 TestErrorSpam/setup 35.72
49 TestErrorSpam/start 0.86
50 TestErrorSpam/status 1.13
51 TestErrorSpam/pause 1.41
52 TestErrorSpam/unpause 1.53
53 TestErrorSpam/stop 2.18
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 45.49
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 38.5
60 TestFunctional/serial/KubeContext 0.07
61 TestFunctional/serial/KubectlGetPods 0.1
64 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
65 TestFunctional/serial/CacheCmd/cache/add_local 0.98
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
67 TestFunctional/serial/CacheCmd/cache/list 0.06
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
69 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
70 TestFunctional/serial/CacheCmd/cache/delete 0.12
71 TestFunctional/serial/MinikubeKubectlCmd 0.15
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
73 TestFunctional/serial/ExtraConfig 38.56
74 TestFunctional/serial/ComponentHealth 0.1
75 TestFunctional/serial/LogsCmd 1.37
76 TestFunctional/serial/LogsFileCmd 1.37
77 TestFunctional/serial/InvalidService 4.88
79 TestFunctional/parallel/ConfigCmd 0.47
80 TestFunctional/parallel/DashboardCmd 14.1
81 TestFunctional/parallel/DryRun 0.49
82 TestFunctional/parallel/InternationalLanguage 0.25
83 TestFunctional/parallel/StatusCmd 1.37
87 TestFunctional/parallel/ServiceCmdConnect 11.69
88 TestFunctional/parallel/AddonsCmd 0.16
89 TestFunctional/parallel/PersistentVolumeClaim 25.23
91 TestFunctional/parallel/SSHCmd 0.7
92 TestFunctional/parallel/CpCmd 1.47
94 TestFunctional/parallel/FileSync 0.35
95 TestFunctional/parallel/CertSync 2.43
99 TestFunctional/parallel/NodeLabels 0.09
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
103 TestFunctional/parallel/License 0.32
104 TestFunctional/parallel/Version/short 0.1
105 TestFunctional/parallel/Version/components 0.82
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
110 TestFunctional/parallel/ImageCommands/ImageBuild 2.95
111 TestFunctional/parallel/ImageCommands/Setup 2.69
112 TestFunctional/parallel/DockerEnv/bash 1.37
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.73
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
118 TestFunctional/parallel/ProfileCmd/profile_list 0.51
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.75
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.09
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.79
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.06
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.96
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.39
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.17
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.01
137 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
138 TestFunctional/parallel/ServiceCmd/List 0.54
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
141 TestFunctional/parallel/ServiceCmd/Format 0.42
142 TestFunctional/parallel/ServiceCmd/URL 0.43
143 TestFunctional/parallel/MountCmd/any-port 8.43
144 TestFunctional/parallel/MountCmd/specific-port 2.33
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.23
146 TestFunctional/delete_addon-resizer_images 0.09
147 TestFunctional/delete_my-image_image 0.02
148 TestFunctional/delete_minikube_cached_images 0.03
152 TestImageBuild/serial/Setup 34.52
153 TestImageBuild/serial/NormalBuild 1.94
154 TestImageBuild/serial/BuildWithBuildArg 0.91
155 TestImageBuild/serial/BuildWithDockerIgnore 0.75
156 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
159 TestIngressAddonLegacy/StartLegacyK8sCluster 84.55
161 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.49
162 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.58
166 TestJSONOutput/start/Command 46.12
167 TestJSONOutput/start/Audit 0
169 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Command 0.64
173 TestJSONOutput/pause/Audit 0
175 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Command 0.59
179 TestJSONOutput/unpause/Audit 0
181 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/stop/Command 10.95
185 TestJSONOutput/stop/Audit 0
187 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
189 TestErrorJSONOutput 0.22
191 TestKicCustomNetwork/create_custom_network 33.79
192 TestKicCustomNetwork/use_default_bridge_network 39.11
193 TestKicExistingNetwork 36.49
194 TestKicCustomSubnet 33.5
195 TestKicStaticIP 34.41
196 TestMainNoArgs 0.05
197 TestMinikubeProfile 73
200 TestMountStart/serial/StartWithMountFirst 10.58
201 TestMountStart/serial/VerifyMountFirst 0.28
202 TestMountStart/serial/StartWithMountSecond 7.57
203 TestMountStart/serial/VerifyMountSecond 0.28
204 TestMountStart/serial/DeleteFirst 1.53
205 TestMountStart/serial/VerifyMountPostDelete 0.28
206 TestMountStart/serial/Stop 1.22
207 TestMountStart/serial/RestartStopped 8.08
208 TestMountStart/serial/VerifyMountPostStop 0.28
211 TestMultiNode/serial/FreshStart2Nodes 83.19
212 TestMultiNode/serial/DeployApp2Nodes 42.88
213 TestMultiNode/serial/PingHostFrom2Pods 1.11
214 TestMultiNode/serial/AddNode 18.32
215 TestMultiNode/serial/ProfileList 0.39
216 TestMultiNode/serial/CopyFile 11.19
217 TestMultiNode/serial/StopNode 2.44
218 TestMultiNode/serial/StartAfterStop 13.63
220 TestMultiNode/serial/DeleteNode 5.17
221 TestMultiNode/serial/StopMultiNode 21.8
222 TestMultiNode/serial/RestartMultiNode 90.19
223 TestMultiNode/serial/ValidateNameConflict 40.45
228 TestPreload 131.99
230 TestScheduledStopUnix 106.05
231 TestSkaffold 108.52
233 TestInsufficientStorage 14.69
234 TestRunningBinaryUpgrade 105.45
236 TestKubernetesUpgrade 426.26
237 TestMissingContainerUpgrade 197.98
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
240 TestNoKubernetes/serial/StartWithK8s 44.88
241 TestNoKubernetes/serial/StartWithStopK8s 8.24
242 TestNoKubernetes/serial/Start 11.09
243 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
244 TestNoKubernetes/serial/ProfileList 0.97
245 TestNoKubernetes/serial/Stop 1.27
246 TestNoKubernetes/serial/StartNoArgs 7.3
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
248 TestStoppedBinaryUpgrade/Setup 1.05
249 TestStoppedBinaryUpgrade/Upgrade 100.15
250 TestStoppedBinaryUpgrade/MinikubeLogs 1.93
259 TestPause/serial/Start 93.64
271 TestPause/serial/SecondStartNoReconfiguration 43.87
272 TestPause/serial/Pause 0.62
273 TestPause/serial/VerifyStatus 0.4
274 TestPause/serial/Unpause 0.73
275 TestPause/serial/PauseAgain 1.03
276 TestPause/serial/DeletePaused 2.45
277 TestPause/serial/VerifyDeletedResources 0.51
279 TestStartStop/group/old-k8s-version/serial/FirstStart 141.97
280 TestStartStop/group/old-k8s-version/serial/DeployApp 9.68
281 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.29
282 TestStartStop/group/old-k8s-version/serial/Stop 11.16
284 TestStartStop/group/no-preload/serial/FirstStart 106.08
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
286 TestStartStop/group/old-k8s-version/serial/SecondStart 405.89
287 TestStartStop/group/no-preload/serial/DeployApp 8.5
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
289 TestStartStop/group/no-preload/serial/Stop 11.13
290 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
291 TestStartStop/group/no-preload/serial/SecondStart 327.57
292 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.15
293 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.16
294 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.59
295 TestStartStop/group/old-k8s-version/serial/Pause 4.76
297 TestStartStop/group/embed-certs/serial/FirstStart 94.39
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 18.04
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
301 TestStartStop/group/no-preload/serial/Pause 3.63
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.47
304 TestStartStop/group/embed-certs/serial/DeployApp 9.57
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
306 TestStartStop/group/embed-certs/serial/Stop 11.09
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
308 TestStartStop/group/embed-certs/serial/SecondStart 320.76
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.6
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.95
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 353.32
314 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.04
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
316 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.42
317 TestStartStop/group/embed-certs/serial/Pause 3.2
319 TestStartStop/group/newest-cni/serial/FirstStart 56.91
320 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 18.04
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.82
323 TestStartStop/group/newest-cni/serial/Stop 8.21
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
325 TestStartStop/group/newest-cni/serial/SecondStart 36.92
326 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.11
329 TestNetworkPlugins/group/auto/Start 98.69
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
333 TestStartStop/group/newest-cni/serial/Pause 4.25
334 TestNetworkPlugins/group/kindnet/Start 70.67
335 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
336 TestNetworkPlugins/group/auto/KubeletFlags 0.37
337 TestNetworkPlugins/group/auto/NetCatPod 10.42
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
339 TestNetworkPlugins/group/kindnet/NetCatPod 9.43
340 TestNetworkPlugins/group/auto/DNS 0.22
341 TestNetworkPlugins/group/auto/Localhost 0.18
342 TestNetworkPlugins/group/auto/HairPin 0.25
343 TestNetworkPlugins/group/kindnet/DNS 0.2
344 TestNetworkPlugins/group/kindnet/Localhost 0.19
345 TestNetworkPlugins/group/kindnet/HairPin 0.2
346 TestNetworkPlugins/group/calico/Start 87.48
347 TestNetworkPlugins/group/custom-flannel/Start 70.46
348 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
349 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.5
350 TestNetworkPlugins/group/calico/ControllerPod 5.03
351 TestNetworkPlugins/group/custom-flannel/DNS 0.24
352 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
353 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
354 TestNetworkPlugins/group/calico/KubeletFlags 0.31
355 TestNetworkPlugins/group/calico/NetCatPod 11.49
356 TestNetworkPlugins/group/calico/DNS 0.3
357 TestNetworkPlugins/group/calico/Localhost 0.36
358 TestNetworkPlugins/group/calico/HairPin 0.34
359 TestNetworkPlugins/group/false/Start 60.32
360 TestNetworkPlugins/group/enable-default-cni/Start 59.13
361 TestNetworkPlugins/group/false/KubeletFlags 0.37
362 TestNetworkPlugins/group/false/NetCatPod 10.42
363 TestNetworkPlugins/group/false/DNS 0.27
364 TestNetworkPlugins/group/false/Localhost 0.21
365 TestNetworkPlugins/group/false/HairPin 0.24
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.62
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.27
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.29
371 TestNetworkPlugins/group/flannel/Start 68.97
372 TestNetworkPlugins/group/bridge/Start 95.11
373 TestNetworkPlugins/group/flannel/ControllerPod 5.04
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
375 TestNetworkPlugins/group/flannel/NetCatPod 11.38
376 TestNetworkPlugins/group/flannel/DNS 0.23
377 TestNetworkPlugins/group/flannel/Localhost 0.21
378 TestNetworkPlugins/group/flannel/HairPin 0.19
379 TestNetworkPlugins/group/kubenet/Start 56.43
380 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
381 TestNetworkPlugins/group/bridge/NetCatPod 11.43
382 TestNetworkPlugins/group/bridge/DNS 0.29
383 TestNetworkPlugins/group/bridge/Localhost 0.23
384 TestNetworkPlugins/group/bridge/HairPin 0.27
385 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
386 TestNetworkPlugins/group/kubenet/NetCatPod 11.33
387 TestNetworkPlugins/group/kubenet/DNS 0.22
388 TestNetworkPlugins/group/kubenet/Localhost 0.17
389 TestNetworkPlugins/group/kubenet/HairPin 0.19
TestDownloadOnly/v1.16.0/json-events (8.18s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-211888 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-211888 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.180475377s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.18s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-211888
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-211888: exit status 85 (69.887343ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-211888 | jenkins | v1.31.2 | 02 Oct 23 10:35 UTC |          |
	|         | -p download-only-211888        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:35:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:35:52.334437 2139705 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:35:52.334567 2139705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:35:52.334574 2139705 out.go:309] Setting ErrFile to fd 2...
	I1002 10:35:52.334580 2139705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:35:52.334828 2139705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	W1002 10:35:52.334976 2139705 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17340-2134307/.minikube/config/config.json: open /home/jenkins/minikube-integration/17340-2134307/.minikube/config/config.json: no such file or directory
	I1002 10:35:52.335387 2139705 out.go:303] Setting JSON to true
	I1002 10:35:52.336336 2139705 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65900,"bootTime":1696177053,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 10:35:52.336408 2139705 start.go:138] virtualization:  
	I1002 10:35:52.339397 2139705 out.go:97] [download-only-211888] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 10:35:52.341578 2139705 out.go:169] MINIKUBE_LOCATION=17340
	W1002 10:35:52.339629 2139705 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 10:35:52.339718 2139705 notify.go:220] Checking for updates...
	I1002 10:35:52.343865 2139705 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:35:52.345835 2139705 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:35:52.347602 2139705 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	I1002 10:35:52.349590 2139705 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 10:35:52.352930 2139705 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 10:35:52.353279 2139705 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:35:52.381301 2139705 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 10:35:52.381391 2139705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:35:52.477624 2139705 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-02 10:35:52.467470992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:35:52.477746 2139705 docker.go:294] overlay module found
	I1002 10:35:52.479711 2139705 out.go:97] Using the docker driver based on user configuration
	I1002 10:35:52.479737 2139705 start.go:298] selected driver: docker
	I1002 10:35:52.479749 2139705 start.go:902] validating driver "docker" against <nil>
	I1002 10:35:52.479861 2139705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:35:52.547311 2139705 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-02 10:35:52.536930031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:35:52.547506 2139705 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 10:35:52.547822 2139705 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1002 10:35:52.548013 2139705 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 10:35:52.550108 2139705 out.go:169] Using Docker driver with root privileges
	I1002 10:35:52.551705 2139705 cni.go:84] Creating CNI manager for ""
	I1002 10:35:52.551731 2139705 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 10:35:52.551743 2139705 start_flags.go:321] config:
	{Name:download-only-211888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-211888 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:35:52.553857 2139705 out.go:97] Starting control plane node download-only-211888 in cluster download-only-211888
	I1002 10:35:52.553882 2139705 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:35:52.555530 2139705 out.go:97] Pulling base image ...
	I1002 10:35:52.555557 2139705 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 10:35:52.555602 2139705 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:35:52.572623 2139705 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 10:35:52.572808 2139705 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 10:35:52.572907 2139705 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 10:35:52.628394 2139705 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1002 10:35:52.628422 2139705 cache.go:57] Caching tarball of preloaded images
	I1002 10:35:52.628608 2139705 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 10:35:52.630816 2139705 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 10:35:52.630843 2139705 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1002 10:35:52.749800 2139705 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-211888"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/json-events (6.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-211888 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-211888 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.991836123s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (6.99s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-211888
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-211888: exit status 85 (72.184449ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-211888 | jenkins | v1.31.2 | 02 Oct 23 10:35 UTC |          |
	|         | -p download-only-211888        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-211888 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC |          |
	|         | -p download-only-211888        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:36:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:36:00.592180 2139780 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:36:00.592341 2139780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:36:00.592361 2139780 out.go:309] Setting ErrFile to fd 2...
	I1002 10:36:00.592367 2139780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:36:00.592614 2139780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	W1002 10:36:00.592755 2139780 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17340-2134307/.minikube/config/config.json: open /home/jenkins/minikube-integration/17340-2134307/.minikube/config/config.json: no such file or directory
	I1002 10:36:00.592990 2139780 out.go:303] Setting JSON to true
	I1002 10:36:00.593860 2139780 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65908,"bootTime":1696177053,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 10:36:00.593932 2139780 start.go:138] virtualization:  
	I1002 10:36:00.596299 2139780 out.go:97] [download-only-211888] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 10:36:00.598232 2139780 out.go:169] MINIKUBE_LOCATION=17340
	I1002 10:36:00.596602 2139780 notify.go:220] Checking for updates...
	I1002 10:36:00.600111 2139780 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:36:00.601894 2139780 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:36:00.603736 2139780 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	I1002 10:36:00.605730 2139780 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 10:36:00.609126 2139780 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 10:36:00.609662 2139780 config.go:182] Loaded profile config "download-only-211888": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1002 10:36:00.609759 2139780 start.go:810] api.Load failed for download-only-211888: filestore "download-only-211888": Docker machine "download-only-211888" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 10:36:00.609876 2139780 driver.go:373] Setting default libvirt URI to qemu:///system
	W1002 10:36:00.609908 2139780 start.go:810] api.Load failed for download-only-211888: filestore "download-only-211888": Docker machine "download-only-211888" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 10:36:00.633580 2139780 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 10:36:00.633680 2139780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:36:00.714726 2139780 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-02 10:36:00.704155415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:36:00.714825 2139780 docker.go:294] overlay module found
	I1002 10:36:00.716934 2139780 out.go:97] Using the docker driver based on existing profile
	I1002 10:36:00.716956 2139780 start.go:298] selected driver: docker
	I1002 10:36:00.716975 2139780 start.go:902] validating driver "docker" against &{Name:download-only-211888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-211888 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:36:00.717167 2139780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:36:00.790724 2139780 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-02 10:36:00.773183914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:36:00.791137 2139780 cni.go:84] Creating CNI manager for ""
	I1002 10:36:00.791160 2139780 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 10:36:00.791175 2139780 start_flags.go:321] config:
	{Name:download-only-211888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-211888 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:36:00.793523 2139780 out.go:97] Starting control plane node download-only-211888 in cluster download-only-211888
	I1002 10:36:00.793544 2139780 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 10:36:00.795597 2139780 out.go:97] Pulling base image ...
	I1002 10:36:00.795623 2139780 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:36:00.795786 2139780 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 10:36:00.812654 2139780 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 10:36:00.812818 2139780 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 10:36:00.812841 2139780 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I1002 10:36:00.812850 2139780 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I1002 10:36:00.812859 2139780 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I1002 10:36:00.863230 2139780 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 10:36:00.863251 2139780 cache.go:57] Caching tarball of preloaded images
	I1002 10:36:00.863390 2139780 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 10:36:00.865524 2139780 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1002 10:36:00.865559 2139780 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I1002 10:36:00.981576 2139780 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4?checksum=md5:48f32a2a1ca4194a6d2a21c3ded2b2db -> /home/jenkins/minikube-integration/17340-2134307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-211888"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-211888
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-087071 --alsologtostderr --binary-mirror http://127.0.0.1:36211 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-087071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-087071
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
x
+
TestOffline (102.5s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-350719 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-350719 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m39.154100947s)
helpers_test.go:175: Cleaning up "offline-docker-350719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-350719
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-350719: (3.342899679s)
--- PASS: TestOffline (102.50s)

                                                
                                    
x
+
TestAddons/Setup (146.18s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-linux-arm64 start -p addons-358443 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:89: (dbg) Done: out/minikube-linux-arm64 start -p addons-358443 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m26.1758403s)
--- PASS: TestAddons/Setup (146.18s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 40.104703ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-77zwl" [eccb8ea5-8a7b-4635-ae3e-581e52d381b3] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020241225s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vtjvv" [9fdaa465-d280-4acb-926f-0390823f5a3a] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01244823s
addons_test.go:318: (dbg) Run:  kubectl --context addons-358443 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-358443 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-358443 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.000779785s)
addons_test.go:337: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 ip
addons_test.go:366: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.30s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hv59k" [fcafd568-d56c-41eb-9788-1742f22a212e] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012656297s
addons_test.go:819: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-358443
addons_test.go:819: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-358443: (5.835899006s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 4.501421ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-6k96t" [1ce4d173-fab0-4400-a21a-28781c10d1c9] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.019762328s
addons_test.go:393: (dbg) Run:  kubectl --context addons-358443 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

                                                
                                    
x
+
TestAddons/parallel/CSI (62.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 45.248583ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-358443 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-358443 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [13e71200-c473-4de9-8a4f-206455eb67e7] Pending
2023/10/02 10:38:49 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:344: "task-pv-pod" [13e71200-c473-4de9-8a4f-206455eb67e7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [13e71200-c473-4de9-8a4f-206455eb67e7] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.027475679s
addons_test.go:562: (dbg) Run:  kubectl --context addons-358443 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-358443 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-358443 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-358443 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-358443 delete pod task-pv-pod
addons_test.go:572: (dbg) Done: kubectl --context addons-358443 delete pod task-pv-pod: (1.077713303s)
addons_test.go:578: (dbg) Run:  kubectl --context addons-358443 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-358443 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-358443 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [eb0154fe-d8f3-481b-8b81-4ab600fd87e7] Pending
helpers_test.go:344: "task-pv-pod-restore" [eb0154fe-d8f3-481b-8b81-4ab600fd87e7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [eb0154fe-d8f3-481b-8b81-4ab600fd87e7] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.02458195s
addons_test.go:604: (dbg) Run:  kubectl --context addons-358443 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-358443 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-358443 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-arm64 -p addons-358443 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.747842518s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.89s)

TestAddons/parallel/Headlamp (12.46s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-358443 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-358443 --alsologtostderr -v=1: (1.420932151s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-cprhb" [a6eaa657-5f15-4f00-8143-9f70dcb9c2bf] Pending
helpers_test.go:344: "headlamp-58b88cff49-cprhb" [a6eaa657-5f15-4f00-8143-9f70dcb9c2bf] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-cprhb" [a6eaa657-5f15-4f00-8143-9f70dcb9c2bf] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.036052284s
--- PASS: TestAddons/parallel/Headlamp (12.46s)

TestAddons/parallel/CloudSpanner (5.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-6kxw5" [1d2fe7f6-f349-41b3-8e27-c543dfb76d87] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01243085s
addons_test.go:838: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-358443
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

TestAddons/parallel/LocalPath (53.64s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-358443 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-358443 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-358443 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [028b6980-ecb6-4edd-9128-fbcd5dad7efb] Pending
helpers_test.go:344: "test-local-path" [028b6980-ecb6-4edd-9128-fbcd5dad7efb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [028b6980-ecb6-4edd-9128-fbcd5dad7efb] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [028b6980-ecb6-4edd-9128-fbcd5dad7efb] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.012841503s
addons_test.go:869: (dbg) Run:  kubectl --context addons-358443 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 ssh "cat /opt/local-path-provisioner/pvc-63154e17-9667-4db8-8485-f9cbbc7d7775_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-358443 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-358443 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-linux-arm64 -p addons-358443 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-linux-arm64 -p addons-358443 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.284284607s)
--- PASS: TestAddons/parallel/LocalPath (53.64s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-358443 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-358443 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (11.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-358443
addons_test.go:150: (dbg) Done: out/minikube-linux-arm64 stop -p addons-358443: (10.993261103s)
addons_test.go:154: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-358443
addons_test.go:158: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-358443
addons_test.go:163: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-358443
--- PASS: TestAddons/StoppedEnableDisable (11.32s)

TestCertOptions (37.89s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-781643 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-781643 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.978351167s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-781643 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-781643 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-781643 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-781643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-781643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-781643: (2.234642903s)
--- PASS: TestCertOptions (37.89s)

TestCertExpiration (254.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-065911 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-065911 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (42.002952001s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-065911 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E1002 11:24:20.138497 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-065911 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (29.962489983s)
helpers_test.go:175: Cleaning up "cert-expiration-065911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-065911
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-065911: (2.137800133s)
--- PASS: TestCertExpiration (254.10s)

TestDockerFlags (43.15s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-646757 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-646757 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.231795023s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-646757 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-646757 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-646757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-646757
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-646757: (2.188165397s)
--- PASS: TestDockerFlags (43.15s)

TestForceSystemdFlag (38.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-442036 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 11:19:20.136795 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-442036 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.381443747s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-442036 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-442036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-442036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-442036: (2.107990229s)
--- PASS: TestForceSystemdFlag (38.88s)

TestForceSystemdEnv (42.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-895947 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 11:19:53.504234 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-895947 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.449987928s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-895947 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-895947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-895947
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-895947: (2.205848053s)
--- PASS: TestForceSystemdEnv (42.05s)

TestErrorSpam/setup (35.72s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-756022 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-756022 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-756022 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-756022 --driver=docker  --container-runtime=docker: (35.724501736s)
--- PASS: TestErrorSpam/setup (35.72s)

TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 pause
--- PASS: TestErrorSpam/pause (1.41s)

TestErrorSpam/unpause (1.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

TestErrorSpam/stop (2.18s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 stop: (1.987072706s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-756022 --log_dir /tmp/nospam-756022 stop
--- PASS: TestErrorSpam/stop (2.18s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17340-2134307/.minikube/files/etc/test/nested/copy/2139700/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.49s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499029 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-499029 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (45.491983622s)
--- PASS: TestFunctional/serial/StartWithProxy (45.49s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.5s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499029 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-499029 --alsologtostderr -v=8: (38.502271706s)
functional_test.go:659: soft start took 38.502785722s for "functional-499029" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.50s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-499029 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 cache add registry.k8s.io/pause:3.1: (1.13110024s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 cache add registry.k8s.io/pause:3.3: (1.141873593s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-499029 /tmp/TestFunctionalserialCacheCmdcacheadd_local2691417287/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 cache add minikube-local-cache-test:functional-499029
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 cache delete minikube-local-cache-test:functional-499029
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-499029
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499029 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (348.83911ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 kubectl -- --context functional-499029 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-499029 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.56s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499029 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 10:43:35.509294 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:35.515922 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:35.526362 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:35.546724 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:35.586981 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:35.667342 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:35.827802 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:36.148285 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:36.789222 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:38.069728 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:40.630448 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:45.750914 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:43:55.991511 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-499029 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.556841689s)
functional_test.go:757: restart took 38.556936047s for "functional-499029" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.56s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-499029 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 logs: (1.368111365s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 logs --file /tmp/TestFunctionalserialLogsFileCmd1609352672/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 logs --file /tmp/TestFunctionalserialLogsFileCmd1609352672/001/logs.txt: (1.369938356s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (4.88s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-499029 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-499029
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-499029: exit status 115 (593.28454ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30418 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-499029 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.88s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499029 config get cpus: exit status 14 (72.189954ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499029 config get cpus: exit status 14 (57.52189ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-499029 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-499029 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2179663: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.10s)

                                                
                                    
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499029 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-499029 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (211.219509ms)

                                                
                                                
-- stdout --
	* [functional-499029] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 10:44:54.613940 2178755 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:44:54.614172 2178755 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:44:54.614184 2178755 out.go:309] Setting ErrFile to fd 2...
	I1002 10:44:54.614191 2178755 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:44:54.614510 2178755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	I1002 10:44:54.614920 2178755 out.go:303] Setting JSON to false
	I1002 10:44:54.616071 2178755 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":66442,"bootTime":1696177053,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 10:44:54.616144 2178755 start.go:138] virtualization:  
	I1002 10:44:54.619961 2178755 out.go:177] * [functional-499029] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 10:44:54.621959 2178755 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:44:54.624057 2178755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:44:54.622127 2178755 notify.go:220] Checking for updates...
	I1002 10:44:54.628083 2178755 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:44:54.630463 2178755 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	I1002 10:44:54.632771 2178755 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 10:44:54.634816 2178755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:44:54.637318 2178755 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:44:54.637880 2178755 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:44:54.663363 2178755 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 10:44:54.663469 2178755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:44:54.760323 2178755 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-02 10:44:54.746869342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:44:54.760435 2178755 docker.go:294] overlay module found
	I1002 10:44:54.762516 2178755 out.go:177] * Using the docker driver based on existing profile
	I1002 10:44:54.764599 2178755 start.go:298] selected driver: docker
	I1002 10:44:54.764640 2178755 start.go:902] validating driver "docker" against &{Name:functional-499029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-499029 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:44:54.764762 2178755 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:44:54.767448 2178755 out.go:177] 
	W1002 10:44:54.770562 2178755 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 10:44:54.772817 2178755 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499029 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.49s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499029 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-499029 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (249.812395ms)

                                                
                                                
-- stdout --
	* [functional-499029] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 10:44:57.872534 2179395 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:44:57.872711 2179395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:44:57.872717 2179395 out.go:309] Setting ErrFile to fd 2...
	I1002 10:44:57.872723 2179395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:44:57.873072 2179395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	I1002 10:44:57.873511 2179395 out.go:303] Setting JSON to false
	I1002 10:44:57.874647 2179395 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":66445,"bootTime":1696177053,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 10:44:57.874728 2179395 start.go:138] virtualization:  
	I1002 10:44:57.880128 2179395 out.go:177] * [functional-499029] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1002 10:44:57.882611 2179395 notify.go:220] Checking for updates...
	I1002 10:44:57.883296 2179395 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:44:57.885790 2179395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:44:57.887622 2179395 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	I1002 10:44:57.889693 2179395 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	I1002 10:44:57.893489 2179395 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 10:44:57.896051 2179395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:44:57.898332 2179395 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:44:57.898954 2179395 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:44:57.941518 2179395 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 10:44:57.941684 2179395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:44:58.037585 2179395 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-02 10:44:58.025167254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:44:58.037713 2179395 docker.go:294] overlay module found
	I1002 10:44:58.040099 2179395 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1002 10:44:58.043057 2179395 start.go:298] selected driver: docker
	I1002 10:44:58.043078 2179395 start.go:902] validating driver "docker" against &{Name:functional-499029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-499029 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:44:58.043194 2179395 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:44:58.045747 2179395 out.go:177] 
	W1002 10:44:58.047614 2179395 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 10:44:58.049509 2179395 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.37s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 status -o json
E1002 10:44:57.432546 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/StatusCmd (1.37s)

TestFunctional/parallel/ServiceCmdConnect (11.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-499029 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-499029 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-7ttcd" [d8a7160b-b448-4854-a0cb-7ed72b9777ba] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-7ttcd" [d8a7160b-b448-4854-a0cb-7ed72b9777ba] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.017233204s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32540
functional_test.go:1674: http://192.168.49.2:32540: success! body:

Hostname: hello-node-connect-7799dfb7c6-7ttcd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32540
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.69s)
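The echoed body above is plain `key=value` text, so the headers the pod saw can be pulled apart with a few lines of Python. The `body` string below is copied from the captured response; nothing minikube-specific is assumed:

```python
# Headers section as captured in the echoserver response above.
body = (
    "Request Headers:\n"
    "\taccept-encoding=gzip\n"
    "\thost=192.168.49.2:32540\n"
    "\tuser-agent=Go-http-client/1.1"
)

# Skip the section title, then split each indented "key=value" line once.
headers = dict(line.strip().split("=", 1) for line in body.splitlines()[1:])
print(headers["host"])  # the NodePort endpoint the `service --url` command returned
```

Note that the echoed `host` header matches the endpoint found at functional_test.go:1654, which is how the test confirms it reached its own pod.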

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (25.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [400d9b34-f302-42ac-b19d-4980bbc80ccc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015325466s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-499029 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-499029 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-499029 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-499029 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a4742629-433e-40b8-8338-b09438ff9b01] Pending
helpers_test.go:344: "sp-pod" [a4742629-433e-40b8-8338-b09438ff9b01] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a4742629-433e-40b8-8338-b09438ff9b01] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.031989363s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-499029 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-499029 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-499029 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4c4f7483-7358-4278-8b20-d963d9f65b7f] Pending
helpers_test.go:344: "sp-pod" [4c4f7483-7358-4278-8b20-d963d9f65b7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4c4f7483-7358-4278-8b20-d963d9f65b7f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.019819292s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-499029 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.23s)
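The invariant this sequence verifies is simple: a file created through the first `sp-pod` (`touch /tmp/mount/foo`) must still be visible to a second pod that mounts the same claim (`ls /tmp/mount`). A local sketch of that invariant, with a temp directory standing in for the provisioned volume — `pod_touch` and `pod_ls` are illustrative helpers, not minikube or kubectl APIs:

```python
import os
import tempfile

# Stand-in for the PV-backed volume that both sp-pod instances mount.
volume = tempfile.mkdtemp(prefix="pv-")

def pod_touch(mount: str, name: str) -> None:
    """Analogue of `kubectl exec sp-pod -- touch /tmp/mount/foo`."""
    open(os.path.join(mount, name), "w").close()

def pod_ls(mount: str) -> list:
    """Analogue of `kubectl exec sp-pod -- ls /tmp/mount`."""
    return sorted(os.listdir(mount))

pod_touch(volume, "foo")   # written via the first pod
# ... first pod deleted, a fresh pod is scheduled onto the same claim ...
print(pod_ls(volume))      # the file must survive the pod swap
```

The point of the temp-directory analogy: the pod is ephemeral, the claim-backed storage is not, so state written before the `delete`/`apply` cycle must reappear afterwards.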

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (1.47s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh -n functional-499029 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 cp functional-499029:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4244078661/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh -n functional-499029 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2139700/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo cat /etc/test/nested/copy/2139700/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2139700.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo cat /etc/ssl/certs/2139700.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2139700.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo cat /usr/share/ca-certificates/2139700.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/21397002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo cat /etc/ssl/certs/21397002.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/21397002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo cat /usr/share/ca-certificates/21397002.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E1002 10:44:16.471922 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (2.43s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-499029 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499029 ssh "sudo systemctl is-active crio": exit status 1 (406.026288ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
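`systemctl is-active` prints the unit state and exits non-zero when the unit is not running (status 3 here, as the captured stderr shows), which is exactly the outcome this test wants for crio on a docker-runtime cluster. The decision it makes can be sketched as follows — `runtime_disabled` is a hypothetical helper for illustration, not minikube code:

```python
def runtime_disabled(stdout: str, exit_code: int) -> bool:
    """True when `systemctl is-active <runtime>` reports a stopped unit,
    i.e. the non-active-runtime case this test treats as a pass."""
    return exit_code != 0 and stdout.strip() == "inactive"

# crio on this cluster: output "inactive", exit status 3 -> expected outcome
print(runtime_disabled("inactive\n", 3))   # True
# an active runtime would print "active" and exit 0, failing the check
print(runtime_disabled("active\n", 0))     # False
```

This is why the "Non-zero exit" line above is logged but the test still passes: the non-zero status is the assertion, not an error.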

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499029 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-499029
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-499029
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499029 image ls --format short --alsologtostderr:
I1002 10:45:09.728843 2181095 out.go:296] Setting OutFile to fd 1 ...
I1002 10:45:09.729061 2181095 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:09.729088 2181095 out.go:309] Setting ErrFile to fd 2...
I1002 10:45:09.729109 2181095 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:09.729422 2181095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
I1002 10:45:09.730110 2181095 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:09.730301 2181095 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:09.730814 2181095 cli_runner.go:164] Run: docker container inspect functional-499029 --format={{.State.Status}}
I1002 10:45:09.754855 2181095 ssh_runner.go:195] Run: systemctl --version
I1002 10:45:09.754912 2181095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499029
I1002 10:45:09.776907 2181095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35500 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/functional-499029/id_rsa Username:docker}
I1002 10:45:09.875705 2181095 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499029 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-499029 | 6bf3593608666 | 30B    |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-apiserver              | v1.28.2           | 30bb499447fe1 | 120MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-499029 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/nginx                     | alpine            | df8fd1ca35d66 | 43.5MB |
| docker.io/library/nginx                     | latest            | 2a4fbb36e9660 | 192MB  |
| registry.k8s.io/kube-proxy                  | v1.28.2           | 7da62c127fc0f | 68.3MB |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 64fc40cee3716 | 57.8MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 89d57b83c1786 | 116MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499029 image ls --format table --alsologtostderr:
I1002 10:45:12.454023 2181522 out.go:296] Setting OutFile to fd 1 ...
I1002 10:45:12.454226 2181522 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:12.454254 2181522 out.go:309] Setting ErrFile to fd 2...
I1002 10:45:12.454274 2181522 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:12.454548 2181522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
I1002 10:45:12.455246 2181522 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:12.455478 2181522 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:12.456018 2181522 cli_runner.go:164] Run: docker container inspect functional-499029 --format={{.State.Status}}
I1002 10:45:12.480189 2181522 ssh_runner.go:195] Run: systemctl --version
I1002 10:45:12.480239 2181522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499029
I1002 10:45:12.501300 2181522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35500 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/functional-499029/id_rsa Username:docker}
I1002 10:45:12.599406 2181522 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499029 image ls --format json --alsologtostderr:
[{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"57800000"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"116000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-499029"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43500000"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"120000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"68300000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"6bf3593608666524cc07eb765e686b79eed6b04d8ac7d1908983e389d21810dd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-499029"],"size":"30"},{"id":"2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499029 image ls --format json --alsologtostderr:
I1002 10:45:12.199514 2181492 out.go:296] Setting OutFile to fd 1 ...
I1002 10:45:12.199752 2181492 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:12.199782 2181492 out.go:309] Setting ErrFile to fd 2...
I1002 10:45:12.199804 2181492 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:12.200086 2181492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
I1002 10:45:12.200823 2181492 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:12.201068 2181492 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:12.201631 2181492 cli_runner.go:164] Run: docker container inspect functional-499029 --format={{.State.Status}}
I1002 10:45:12.220131 2181492 ssh_runner.go:195] Run: systemctl --version
I1002 10:45:12.220187 2181492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499029
I1002 10:45:12.240660 2181492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35500 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/functional-499029/id_rsa Username:docker}
I1002 10:45:12.339490 2181492 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499029 image ls --format yaml --alsologtostderr:
- id: df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43500000"
- id: 2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "120000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "57800000"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "116000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "68300000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-499029
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6bf3593608666524cc07eb765e686b79eed6b04d8ac7d1908983e389d21810dd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-499029
size: "30"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499029 image ls --format yaml --alsologtostderr:
I1002 10:45:09.989076 2181121 out.go:296] Setting OutFile to fd 1 ...
I1002 10:45:09.989329 2181121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:09.989353 2181121 out.go:309] Setting ErrFile to fd 2...
I1002 10:45:09.989371 2181121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:09.989609 2181121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
I1002 10:45:09.990351 2181121 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:09.990564 2181121 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:09.991126 2181121 cli_runner.go:164] Run: docker container inspect functional-499029 --format={{.State.Status}}
I1002 10:45:10.016764 2181121 ssh_runner.go:195] Run: systemctl --version
I1002 10:45:10.016845 2181121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499029
I1002 10:45:10.043532 2181121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35500 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/functional-499029/id_rsa Username:docker}
I1002 10:45:10.159982 2181121 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499029 ssh pgrep buildkitd: exit status 1 (411.006549ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image build -t localhost/my-image:functional-499029 testdata/build --alsologtostderr
2023/10/02 10:45:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 image build -t localhost/my-image:functional-499029 testdata/build --alsologtostderr: (2.292482951s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499029 image build -t localhost/my-image:functional-499029 testdata/build --alsologtostderr:
I1002 10:45:10.691501 2181252 out.go:296] Setting OutFile to fd 1 ...
I1002 10:45:10.692454 2181252 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:10.692473 2181252 out.go:309] Setting ErrFile to fd 2...
I1002 10:45:10.692481 2181252 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:45:10.692816 2181252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
I1002 10:45:10.693616 2181252 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:10.694329 2181252 config.go:182] Loaded profile config "functional-499029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 10:45:10.694919 2181252 cli_runner.go:164] Run: docker container inspect functional-499029 --format={{.State.Status}}
I1002 10:45:10.730595 2181252 ssh_runner.go:195] Run: systemctl --version
I1002 10:45:10.730652 2181252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499029
I1002 10:45:10.766381 2181252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35500 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/functional-499029/id_rsa Username:docker}
I1002 10:45:10.867993 2181252 build_images.go:151] Building image from path: /tmp/build.2698447857.tar
I1002 10:45:10.868067 2181252 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 10:45:10.885004 2181252 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2698447857.tar
I1002 10:45:10.890619 2181252 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2698447857.tar: stat -c "%s %y" /var/lib/minikube/build/build.2698447857.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2698447857.tar': No such file or directory
I1002 10:45:10.890650 2181252 ssh_runner.go:362] scp /tmp/build.2698447857.tar --> /var/lib/minikube/build/build.2698447857.tar (3072 bytes)
I1002 10:45:10.924865 2181252 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2698447857
I1002 10:45:10.936705 2181252 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2698447857 -xf /var/lib/minikube/build/build.2698447857.tar
I1002 10:45:10.949427 2181252 docker.go:340] Building image: /var/lib/minikube/build/build.2698447857
I1002 10:45:10.949498 2181252 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-499029 /var/lib/minikube/build/build.2698447857
#0 building with "default" instance using docker driver

#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.9s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:dc18274e5302f6bf2f98e1281eb046fe57f35f32c29b3d56c6b509efbe5ddc63 done
#8 naming to localhost/my-image:functional-499029 done
#8 DONE 0.0s
I1002 10:45:12.874875 2181252 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-499029 /var/lib/minikube/build/build.2698447857: (1.925349083s)
I1002 10:45:12.874960 2181252 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2698447857
I1002 10:45:12.888025 2181252 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2698447857.tar
I1002 10:45:12.899474 2181252 build_images.go:207] Built localhost/my-image:functional-499029 from /tmp/build.2698447857.tar
I1002 10:45:12.899505 2181252 build_images.go:123] succeeded building to: functional-499029
I1002 10:45:12.899510 2181252 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)
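For reference, the three build steps recorded in the BuildKit output above ([1/3] FROM busybox, [2/3] RUN true, [3/3] ADD content.txt) suggest that testdata/build contains a Dockerfile roughly equivalent to this sketch; this is reconstructed from the log, not copied from the repository, and the actual file may reference the busybox image by digest rather than tag:

```dockerfile
# Sketch reconstructed from BuildKit steps #5-#7 above (assumption:
# the real testdata/build Dockerfile may pin the base image by digest).
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```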
TestFunctional/parallel/ImageCommands/Setup (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.653274564s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-499029
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.69s)

TestFunctional/parallel/DockerEnv/bash (1.37s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-499029 docker-env) && out/minikube-linux-arm64 status -p functional-499029"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-499029 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.37s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image load --daemon gcr.io/google-containers/addon-resizer:functional-499029 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 image load --daemon gcr.io/google-containers/addon-resizer:functional-499029 --alsologtostderr: (4.342686004s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.73s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "434.162543ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "73.443445ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "421.243581ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "96.785207ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499029 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499029 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-499029 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2176353: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-499029 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image load --daemon gcr.io/google-containers/addon-resizer:functional-499029 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 image load --daemon gcr.io/google-containers/addon-resizer:functional-499029 --alsologtostderr: (2.845755453s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.09s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499029 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.79s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-499029 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [68cca805-c7cf-4867-b3b1-8566b5f1e6b1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [68cca805-c7cf-4867-b3b1-8566b5f1e6b1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.0316681s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.487321349s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-499029
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image load --daemon gcr.io/google-containers/addon-resizer:functional-499029 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 image load --daemon gcr.io/google-containers/addon-resizer:functional-499029 --alsologtostderr: (3.333616196s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.06s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image save gcr.io/google-containers/addon-resizer:functional-499029 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image rm gcr.io/google-containers/addon-resizer:functional-499029 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.167175991s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-499029 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.69.237 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-499029 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-499029
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 image save --daemon gcr.io/google-containers/addon-resizer:functional-499029 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-499029
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.01s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-499029 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-499029 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-4v9bp" [6b0ed31e-c036-4e86-bbdc-a4bc0891c08a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-4v9bp" [6b0ed31e-c036-4e86-bbdc-a4bc0891c08a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.016358102s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 service list -o json
functional_test.go:1493: Took "555.951969ms" to run "out/minikube-linux-arm64 -p functional-499029 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31504
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31504
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdany-port2588086199/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696243495057420791" to /tmp/TestFunctionalparallelMountCmdany-port2588086199/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696243495057420791" to /tmp/TestFunctionalparallelMountCmdany-port2588086199/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696243495057420791" to /tmp/TestFunctionalparallelMountCmdany-port2588086199/001/test-1696243495057420791
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (398.65314ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 10:44 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 10:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 10:44 test-1696243495057420791
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh cat /mount-9p/test-1696243495057420791
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-499029 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5f48b28c-6a27-4ccd-a473-4243b5e7e4ce] Pending
helpers_test.go:344: "busybox-mount" [5f48b28c-6a27-4ccd-a473-4243b5e7e4ce] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5f48b28c-6a27-4ccd-a473-4243b5e7e4ce] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5f48b28c-6a27-4ccd-a473-4243b5e7e4ce] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.022467036s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-499029 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdany-port2588086199/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.43s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdspecific-port1770169752/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (586.373935ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdspecific-port1770169752/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499029 ssh "sudo umount -f /mount-9p": exit status 1 (421.767943ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-499029 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdspecific-port1770169752/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.33s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1315118571/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1315118571/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1315118571/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T" /mount1: (1.342470203s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-499029 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-499029 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1315118571/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1315118571/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1315118571/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-499029
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-499029
--- PASS: TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-499029
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-182912 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-182912 --driver=docker  --container-runtime=docker: (34.517419257s)
--- PASS: TestImageBuild/serial/Setup (34.52s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-182912
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-182912: (1.936315501s)
--- PASS: TestImageBuild/serial/NormalBuild (1.94s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-182912
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.91s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-182912
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-182912
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-566627 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 10:46:19.352862 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-566627 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m24.550976784s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (84.55s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-566627 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-566627 addons enable ingress --alsologtostderr -v=5: (11.487669007s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.49s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-566627 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-173609 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E1002 10:48:35.508859 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 10:49:03.193431 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-173609 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (46.124225742s)
--- PASS: TestJSONOutput/start/Command (46.12s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-173609 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-173609 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-173609 --output=json --user=testUser
E1002 10:49:20.137414 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:20.142784 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:20.153038 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:20.173350 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:20.213647 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:20.294050 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:20.454442 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:20.775132 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:21.416007 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:22.696970 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:25.258109 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-173609 --output=json --user=testUser: (10.946010161s)
--- PASS: TestJSONOutput/stop/Command (10.95s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-319806 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-319806 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.205114ms)

-- stdout --
	{"specversion":"1.0","id":"169c1329-4e7e-4003-8b47-d28ea79ebefe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-319806] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9782ba20-61cf-49aa-818c-a9b47a1d1416","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17340"}}
	{"specversion":"1.0","id":"04b1d172-32d9-482a-8ad0-b1db19f3e05a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e2a826ec-d61c-47af-91d0-fc9fcefdcaca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig"}}
	{"specversion":"1.0","id":"435d6915-1147-475d-8c70-7704b75d88ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube"}}
	{"specversion":"1.0","id":"b123464f-e281-4cb4-acab-3e845b1e10ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1a23c964-b3a0-4531-8dd0-8764ce562f36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d84a625b-c0f7-44a1-b69a-c4417ff2774a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-319806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-319806
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (33.79s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-042014 --network=
E1002 10:49:30.378774 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:49:40.618997 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-042014 --network=: (31.605029557s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-042014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-042014
E1002 10:50:01.099259 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-042014: (2.142632057s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.79s)

TestKicCustomNetwork/use_default_bridge_network (39.11s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-476630 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-476630 --network=bridge: (37.16364304s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-476630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-476630
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-476630: (1.918940049s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.11s)

TestKicExistingNetwork (36.49s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-397927 --network=existing-network
E1002 10:50:42.060263 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-397927 --network=existing-network: (34.312663928s)
helpers_test.go:175: Cleaning up "existing-network-397927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-397927
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-397927: (2.010252308s)
--- PASS: TestKicExistingNetwork (36.49s)

TestKicCustomSubnet (33.5s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-632542 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-632542 --subnet=192.168.60.0/24: (31.378612518s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-632542 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-632542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-632542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-632542: (2.096344338s)
--- PASS: TestKicCustomSubnet (33.50s)

TestKicStaticIP (34.41s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-817158 --static-ip=192.168.200.200
E1002 10:52:03.980896 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-817158 --static-ip=192.168.200.200: (32.067007297s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-817158 ip
helpers_test.go:175: Cleaning up "static-ip-817158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-817158
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-817158: (2.143036959s)
--- PASS: TestKicStaticIP (34.41s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.00s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-952903 --driver=docker  --container-runtime=docker
E1002 10:52:33.692569 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:33.697984 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:33.708238 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:33.728570 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:33.768814 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:33.849151 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:34.009676 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:34.330202 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:34.971214 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:36.251453 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:38.811651 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:43.931888 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 10:52:54.172037 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-952903 --driver=docker  --container-runtime=docker: (33.654427946s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-955546 --driver=docker  --container-runtime=docker
E1002 10:53:14.652477 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-955546 --driver=docker  --container-runtime=docker: (33.862245234s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-952903
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-955546
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-955546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-955546
E1002 10:53:35.508919 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-955546: (2.08949352s)
helpers_test.go:175: Cleaning up "first-952903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-952903
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-952903: (2.145325074s)
--- PASS: TestMinikubeProfile (73.00s)

TestMountStart/serial/StartWithMountFirst (10.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-629534 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-629534 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.580900414s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.58s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-629534 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.57s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-631561 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E1002 10:53:55.613597 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-631561 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.573872885s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.57s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-631561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.53s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-629534 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-629534 --alsologtostderr -v=5: (1.530298292s)
--- PASS: TestMountStart/serial/DeleteFirst (1.53s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-631561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-631561
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-631561: (1.21907352s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.08s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-631561
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-631561: (7.078587944s)
--- PASS: TestMountStart/serial/RestartStopped (8.08s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-631561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (83.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-899833 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1002 10:54:20.137413 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:54:47.821335 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 10:55:17.534450 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-899833 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.518874312s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.19s)

TestMultiNode/serial/DeployApp2Nodes (42.88s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-899833 -- rollout status deployment/busybox: (3.522507046s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-n7gl6 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-wzmtg -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-n7gl6 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-wzmtg -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-n7gl6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-wzmtg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (42.88s)

TestMultiNode/serial/PingHostFrom2Pods (1.11s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-n7gl6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-n7gl6 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-wzmtg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-899833 -- exec busybox-5bc68d56bd-wzmtg -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.11s)

TestMultiNode/serial/AddNode (18.32s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-899833 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-899833 -v 3 --alsologtostderr: (17.318866454s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.32s)

TestMultiNode/serial/ProfileList (0.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

TestMultiNode/serial/CopyFile (11.19s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp testdata/cp-test.txt multinode-899833:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2344565154/001/cp-test_multinode-899833.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833:/home/docker/cp-test.txt multinode-899833-m02:/home/docker/cp-test_multinode-899833_multinode-899833-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m02 "sudo cat /home/docker/cp-test_multinode-899833_multinode-899833-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833:/home/docker/cp-test.txt multinode-899833-m03:/home/docker/cp-test_multinode-899833_multinode-899833-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m03 "sudo cat /home/docker/cp-test_multinode-899833_multinode-899833-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp testdata/cp-test.txt multinode-899833-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2344565154/001/cp-test_multinode-899833-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833-m02:/home/docker/cp-test.txt multinode-899833:/home/docker/cp-test_multinode-899833-m02_multinode-899833.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833 "sudo cat /home/docker/cp-test_multinode-899833-m02_multinode-899833.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833-m02:/home/docker/cp-test.txt multinode-899833-m03:/home/docker/cp-test_multinode-899833-m02_multinode-899833-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m03 "sudo cat /home/docker/cp-test_multinode-899833-m02_multinode-899833-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp testdata/cp-test.txt multinode-899833-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2344565154/001/cp-test_multinode-899833-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833-m03:/home/docker/cp-test.txt multinode-899833:/home/docker/cp-test_multinode-899833-m03_multinode-899833.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833 "sudo cat /home/docker/cp-test_multinode-899833-m03_multinode-899833.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 cp multinode-899833-m03:/home/docker/cp-test.txt multinode-899833-m02:/home/docker/cp-test_multinode-899833-m03_multinode-899833-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 ssh -n multinode-899833-m02 "sudo cat /home/docker/cp-test_multinode-899833-m03_multinode-899833-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.19s)

TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-899833 node stop m03: (1.261491718s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-899833 status: exit status 7 (600.307572ms)

-- stdout --
	multinode-899833
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-899833-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-899833-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-899833 status --alsologtostderr: exit status 7 (581.193075ms)

-- stdout --
	multinode-899833
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-899833-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-899833-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 10:56:49.675616 2245525 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:56:49.675867 2245525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:56:49.675897 2245525 out.go:309] Setting ErrFile to fd 2...
	I1002 10:56:49.675918 2245525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:56:49.676221 2245525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	I1002 10:56:49.676508 2245525 out.go:303] Setting JSON to false
	I1002 10:56:49.676691 2245525 mustload.go:65] Loading cluster: multinode-899833
	I1002 10:56:49.676696 2245525 notify.go:220] Checking for updates...
	I1002 10:56:49.677308 2245525 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 10:56:49.677350 2245525 status.go:255] checking status of multinode-899833 ...
	I1002 10:56:49.677928 2245525 cli_runner.go:164] Run: docker container inspect multinode-899833 --format={{.State.Status}}
	I1002 10:56:49.699217 2245525 status.go:330] multinode-899833 host status = "Running" (err=<nil>)
	I1002 10:56:49.699259 2245525 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:56:49.699559 2245525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833
	I1002 10:56:49.720408 2245525 host.go:66] Checking if "multinode-899833" exists ...
	I1002 10:56:49.720734 2245525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:56:49.720781 2245525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833
	I1002 10:56:49.752387 2245525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35570 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833/id_rsa Username:docker}
	I1002 10:56:49.847743 2245525 ssh_runner.go:195] Run: systemctl --version
	I1002 10:56:49.853305 2245525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:56:49.867282 2245525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 10:56:49.946374 2245525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 10:56:49.936379542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 10:56:49.947155 2245525 kubeconfig.go:92] found "multinode-899833" server: "https://192.168.58.2:8443"
	I1002 10:56:49.947181 2245525 api_server.go:166] Checking apiserver status ...
	I1002 10:56:49.947234 2245525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:56:49.960848 2245525 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2202/cgroup
	I1002 10:56:49.973506 2245525 api_server.go:182] apiserver freezer: "5:freezer:/docker/1e76ac47762cab5b5da0c5271ec5cab4d917a0f9ea9ea2e9d271ee6fac780cb0/kubepods/burstable/pod6b8321b57953ac8c68ccd1f025f1ab0e/1bdae6fab8f9d35f77968c9cea04f1c2daf8a0138cd0a9e1b69bedadaad89f71"
	I1002 10:56:49.973586 2245525 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1e76ac47762cab5b5da0c5271ec5cab4d917a0f9ea9ea2e9d271ee6fac780cb0/kubepods/burstable/pod6b8321b57953ac8c68ccd1f025f1ab0e/1bdae6fab8f9d35f77968c9cea04f1c2daf8a0138cd0a9e1b69bedadaad89f71/freezer.state
	I1002 10:56:49.984856 2245525 api_server.go:204] freezer state: "THAWED"
	I1002 10:56:49.984892 2245525 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 10:56:49.994425 2245525 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 10:56:49.994465 2245525 status.go:421] multinode-899833 apiserver status = Running (err=<nil>)
	I1002 10:56:49.994511 2245525 status.go:257] multinode-899833 status: &{Name:multinode-899833 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 10:56:49.994554 2245525 status.go:255] checking status of multinode-899833-m02 ...
	I1002 10:56:49.994935 2245525 cli_runner.go:164] Run: docker container inspect multinode-899833-m02 --format={{.State.Status}}
	I1002 10:56:50.015794 2245525 status.go:330] multinode-899833-m02 host status = "Running" (err=<nil>)
	I1002 10:56:50.015829 2245525 host.go:66] Checking if "multinode-899833-m02" exists ...
	I1002 10:56:50.016197 2245525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899833-m02
	I1002 10:56:50.039521 2245525 host.go:66] Checking if "multinode-899833-m02" exists ...
	I1002 10:56:50.039853 2245525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 10:56:50.039903 2245525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899833-m02
	I1002 10:56:50.060623 2245525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35575 SSHKeyPath:/home/jenkins/minikube-integration/17340-2134307/.minikube/machines/multinode-899833-m02/id_rsa Username:docker}
	I1002 10:56:50.159862 2245525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:56:50.174405 2245525 status.go:257] multinode-899833-m02 status: &{Name:multinode-899833-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 10:56:50.174451 2245525 status.go:255] checking status of multinode-899833-m03 ...
	I1002 10:56:50.174764 2245525 cli_runner.go:164] Run: docker container inspect multinode-899833-m03 --format={{.State.Status}}
	I1002 10:56:50.197071 2245525 status.go:330] multinode-899833-m03 host status = "Stopped" (err=<nil>)
	I1002 10:56:50.197097 2245525 status.go:343] host is not running, skipping remaining checks
	I1002 10:56:50.197105 2245525 status.go:257] multinode-899833-m03 status: &{Name:multinode-899833-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-899833 node start m03 --alsologtostderr: (12.768481792s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.63s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-899833 node delete m03: (4.418757083s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-899833 stop: (21.61527408s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-899833 status: exit status 7 (95.051372ms)

                                                
                                                
-- stdout --
	multinode-899833
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-899833-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-899833 status --alsologtostderr: exit status 7 (88.38877ms)

                                                
                                                
-- stdout --
	multinode-899833
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-899833-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 11:02:02.802038 2266223 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:02:02.802169 2266223 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:02:02.802177 2266223 out.go:309] Setting ErrFile to fd 2...
	I1002 11:02:02.802183 2266223 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:02:02.802432 2266223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-2134307/.minikube/bin
	I1002 11:02:02.802700 2266223 out.go:303] Setting JSON to false
	I1002 11:02:02.802793 2266223 mustload.go:65] Loading cluster: multinode-899833
	I1002 11:02:02.802879 2266223 notify.go:220] Checking for updates...
	I1002 11:02:02.803313 2266223 config.go:182] Loaded profile config "multinode-899833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 11:02:02.803328 2266223 status.go:255] checking status of multinode-899833 ...
	I1002 11:02:02.803825 2266223 cli_runner.go:164] Run: docker container inspect multinode-899833 --format={{.State.Status}}
	I1002 11:02:02.824177 2266223 status.go:330] multinode-899833 host status = "Stopped" (err=<nil>)
	I1002 11:02:02.824198 2266223 status.go:343] host is not running, skipping remaining checks
	I1002 11:02:02.824208 2266223 status.go:257] multinode-899833 status: &{Name:multinode-899833 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 11:02:02.824251 2266223 status.go:255] checking status of multinode-899833-m02 ...
	I1002 11:02:02.824779 2266223 cli_runner.go:164] Run: docker container inspect multinode-899833-m02 --format={{.State.Status}}
	I1002 11:02:02.843573 2266223 status.go:330] multinode-899833-m02 host status = "Stopped" (err=<nil>)
	I1002 11:02:02.843598 2266223 status.go:343] host is not running, skipping remaining checks
	I1002 11:02:02.843606 2266223 status.go:257] multinode-899833-m02 status: &{Name:multinode-899833-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (90.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-899833 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1002 11:02:33.691943 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-899833 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m29.236871397s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-899833 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.19s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-899833
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-899833-m02 --driver=docker  --container-runtime=docker
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-899833-m02 --driver=docker  --container-runtime=docker: exit status 14 (92.921774ms)

                                                
                                                
-- stdout --
	* [multinode-899833-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-899833-m02' is duplicated with machine name 'multinode-899833-m02' in profile 'multinode-899833'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-899833-m03 --driver=docker  --container-runtime=docker
E1002 11:03:35.508992 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-899833-m03 --driver=docker  --container-runtime=docker: (37.796926444s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-899833
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-899833: exit status 80 (356.693084ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-899833
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-899833-m03 already exists in multinode-899833-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-899833-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-899833-m03: (2.13823555s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.45s)

                                                
                                    
TestPreload (131.99s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-167235 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E1002 11:04:20.136850 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-167235 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m2.892567272s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-167235 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-167235 image pull gcr.io/k8s-minikube/busybox: (1.496988389s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-167235
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-167235: (10.940198961s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-167235 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1002 11:05:43.181667 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-167235 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (54.22120674s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-167235 image list
helpers_test.go:175: Cleaning up "test-preload-167235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-167235
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-167235: (2.217518553s)
--- PASS: TestPreload (131.99s)

                                                
                                    
TestScheduledStopUnix (106.05s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-317975 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-317975 --memory=2048 --driver=docker  --container-runtime=docker: (32.774462734s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-317975 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-317975 -n scheduled-stop-317975
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-317975 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-317975 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-317975 -n scheduled-stop-317975
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-317975
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-317975 --schedule 15s
E1002 11:07:33.691826 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-317975
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-317975: exit status 7 (69.071902ms)

                                                
                                                
-- stdout --
	scheduled-stop-317975
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-317975 -n scheduled-stop-317975
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-317975 -n scheduled-stop-317975: exit status 7 (68.41671ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-317975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-317975
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-317975: (1.628739634s)
--- PASS: TestScheduledStopUnix (106.05s)

                                                
                                    
TestSkaffold (108.52s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3401569045 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-513062 --memory=2600 --driver=docker  --container-runtime=docker
E1002 11:08:35.509163 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-513062 --memory=2600 --driver=docker  --container-runtime=docker: (32.48073374s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3401569045 run --minikube-profile skaffold-513062 --kube-context skaffold-513062 --status-check=true --port-forward=false --interactive=false
E1002 11:08:56.736384 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 11:09:20.137422 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3401569045 run --minikube-profile skaffold-513062 --kube-context skaffold-513062 --status-check=true --port-forward=false --interactive=false: (1m0.98917434s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6c7ff6f9f8-7dlkl" [b637d1c6-3ad2-4195-8766-f8c831fb1971] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.026753091s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7f47b8d88b-cdmvj" [3b9268d7-79fc-46b6-a923-80427497ecaf] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.010012316s
helpers_test.go:175: Cleaning up "skaffold-513062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-513062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-513062: (2.947129459s)
--- PASS: TestSkaffold (108.52s)

                                                
                                    
TestInsufficientStorage (14.69s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-266146 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-266146 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (12.313118612s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3d5072fa-e8c4-45dd-91e6-c51e3e94f20c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-266146] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9da6d9d-69cc-418b-bcae-c93d18f7f987","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17340"}}
	{"specversion":"1.0","id":"b3d4027a-242a-4500-9ead-d9a0661ed3c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5eb8b844-370a-476c-a89f-31b29a366492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig"}}
	{"specversion":"1.0","id":"9ea2e97a-27dd-4e27-b104-07797ec5d0fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube"}}
	{"specversion":"1.0","id":"c19b89bf-70a8-4282-9de1-7ee8023abb30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"acd5e19f-19f8-435e-b83a-1c08c720a111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3abf6ac4-c157-4d9f-bf07-a35ceeb252d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"612644c4-320b-47f1-a99c-fa6f084c3852","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"77b545c2-edfc-4e00-975a-521b111305df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5544f1d6-1d0f-4ccc-a6af-7c43c7ea38eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0655e471-5ce7-433e-aba0-20c5abea6522","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-266146 in cluster insufficient-storage-266146","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"69289bf9-da51-4292-8663-af6ea3d24cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f6353c8-9076-4a39-9bb0-ec7f0116867b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c9f9b1f-0097-422e-b5d2-0643b6a69d2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-266146 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-266146 --output=json --layout=cluster: exit status 7 (319.446818ms)

-- stdout --
	{"Name":"insufficient-storage-266146","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-266146","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1002 11:10:19.120137 2302241 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-266146" does not appear in /home/jenkins/minikube-integration/17340-2134307/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-266146 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-266146 --output=json --layout=cluster: exit status 7 (307.181608ms)

-- stdout --
	{"Name":"insufficient-storage-266146","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-266146","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1002 11:10:19.429190 2302294 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-266146" does not appear in /home/jenkins/minikube-integration/17340-2134307/kubeconfig
	E1002 11:10:19.441583 2302294 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/insufficient-storage-266146/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-266146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-266146
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-266146: (1.744729258s)
--- PASS: TestInsufficientStorage (14.69s)

TestRunningBinaryUpgrade (105.45s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3061911233.exe start -p running-upgrade-165863 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1002 11:17:33.692422 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 11:17:37.348048 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3061911233.exe start -p running-upgrade-165863 --memory=2200 --vm-driver=docker  --container-runtime=docker: (57.104896683s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-165863 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-165863 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.761915468s)
helpers_test.go:175: Cleaning up "running-upgrade-165863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-165863
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-165863: (2.462858815s)
--- PASS: TestRunningBinaryUpgrade (105.45s)

TestKubernetesUpgrade (426.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-584658 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 11:12:33.692115 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-584658 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m15.074142052s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-584658
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-584658: (10.382165298s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-584658 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-584658 status --format={{.Host}}: exit status 7 (120.851836ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-584658 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 11:13:35.508888 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-584658 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m54.898191614s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-584658 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-584658 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-584658 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (75.307966ms)

-- stdout --
	* [kubernetes-upgrade-584658] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-584658
	    minikube start -p kubernetes-upgrade-584658 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5846582 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-584658 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-584658 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-584658 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.30972726s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-584658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-584658
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-584658: (2.259985386s)
--- PASS: TestKubernetesUpgrade (426.26s)

TestMissingContainerUpgrade (197.98s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.3214525702.exe start -p missing-upgrade-191283 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.3214525702.exe start -p missing-upgrade-191283 --memory=2200 --driver=docker  --container-runtime=docker: (2m4.154380235s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-191283
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-191283: (10.448234093s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-191283
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-191283 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 11:14:20.137306 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-191283 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m0.162421646s)
helpers_test.go:175: Cleaning up "missing-upgrade-191283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-191283
E1002 11:14:53.504355 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:14:53.509772 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:14:53.520013 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:14:53.540318 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:14:53.580578 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:14:53.660924 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:14:53.821413 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:14:54.142415 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:14:54.782930 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-191283: (2.269243168s)
--- PASS: TestMissingContainerUpgrade (197.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-362198 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-362198 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (96.855919ms)

-- stdout --
	* [NoKubernetes-362198] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-2134307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-2134307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (44.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-362198 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-362198 --driver=docker  --container-runtime=docker: (44.428940553s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-362198 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.88s)

TestNoKubernetes/serial/StartWithStopK8s (8.24s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-362198 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-362198 --no-kubernetes --driver=docker  --container-runtime=docker: (6.102270406s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-362198 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-362198 status -o json: exit status 2 (375.133078ms)

-- stdout --
	{"Name":"NoKubernetes-362198","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-362198
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-362198: (1.76463546s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.24s)

TestNoKubernetes/serial/Start (11.09s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-362198 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-362198 --no-kubernetes --driver=docker  --container-runtime=docker: (11.086394349s)
--- PASS: TestNoKubernetes/serial/Start (11.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-362198 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-362198 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.716962ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (0.97s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.97s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-362198
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-362198: (1.271367709s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-362198 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-362198 --driver=docker  --container-runtime=docker: (7.304792422s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.30s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-362198 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-362198 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.743335ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (1.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
E1002 11:14:56.063419 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

TestStoppedBinaryUpgrade/Upgrade (100.15s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3600077362.exe start -p stopped-upgrade-683309 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1002 11:14:58.624369 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:15:03.744558 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:15:13.985665 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:15:34.466687 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.3600077362.exe start -p stopped-upgrade-683309 --memory=2200 --vm-driver=docker  --container-runtime=docker: (56.256406786s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.3600077362.exe -p stopped-upgrade-683309 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.3600077362.exe -p stopped-upgrade-683309 stop: (10.804580521s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-683309 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 11:16:15.426871 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-683309 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.085793491s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.15s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-683309
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-683309: (1.925145322s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.93s)

TestPause/serial/Start (93.64s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-994778 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1002 11:18:35.509656 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-994778 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m33.637620689s)
--- PASS: TestPause/serial/Start (93.64s)

TestPause/serial/SecondStartNoReconfiguration (43.87s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-994778 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 11:20:21.189024 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-994778 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.848465477s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.87s)

TestPause/serial/Pause (0.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-994778 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-994778 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-994778 --output=json --layout=cluster: exit status 2 (400.544687ms)

-- stdout --
	{"Name":"pause-994778","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-994778","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)

TestPause/serial/Unpause (0.73s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-994778 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

TestPause/serial/PauseAgain (1.03s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-994778 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-994778 --alsologtostderr -v=5: (1.026099979s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

TestPause/serial/DeletePaused (2.45s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-994778 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-994778 --alsologtostderr -v=5: (2.448643945s)
--- PASS: TestPause/serial/DeletePaused (2.45s)

TestPause/serial/VerifyDeletedResources (0.51s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-994778
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-994778: exit status 1 (23.504075ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-994778: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)

TestStartStop/group/old-k8s-version/serial/FirstStart (141.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-736992 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1002 11:22:23.181997 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 11:22:33.692861 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 11:23:35.509244 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-736992 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m21.973063152s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (141.97s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-736992 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ca0733ae-a28b-49b6-8bd1-7cf86ca977c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ca0733ae-a28b-49b6-8bd1-7cf86ca977c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.046869195s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-736992 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.68s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-736992 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-736992 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.936152762s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-736992 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.29s)

TestStartStop/group/old-k8s-version/serial/Stop (11.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-736992 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-736992 --alsologtostderr -v=3: (11.156420971s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.16s)

TestStartStop/group/no-preload/serial/FirstStart (106.08s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-929502 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1002 11:24:53.503353 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-929502 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (1m46.082567073s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (106.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736992 -n old-k8s-version-736992
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736992 -n old-k8s-version-736992: exit status 7 (100.57553ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-736992 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (405.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-736992 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1002 11:25:36.736586 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-736992 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (6m45.367219418s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736992 -n old-k8s-version-736992
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (405.89s)

TestStartStop/group/no-preload/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-929502 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [24bc3cdb-4c90-4e93-98ad-b53a692bdb86] Pending
helpers_test.go:344: "busybox" [24bc3cdb-4c90-4e93-98ad-b53a692bdb86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [24bc3cdb-4c90-4e93-98ad-b53a692bdb86] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.029664315s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-929502 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.50s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-929502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-929502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097436723s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-929502 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (11.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-929502 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-929502 --alsologtostderr -v=3: (11.129508805s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-929502 -n no-preload-929502
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-929502 -n no-preload-929502: exit status 7 (68.634819ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-929502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (327.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-929502 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1002 11:27:33.691962 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 11:28:35.509702 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 11:29:20.137175 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 11:29:53.504312 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:31:16.549414 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-929502 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (5m26.811976184s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-929502 -n no-preload-929502
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (327.57s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-v5258" [6eb67709-a83d-49f8-8ccb-13d06203acd1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.147063882s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-v5258" [6eb67709-a83d-49f8-8ccb-13d06203acd1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012077288s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-736992 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-736992 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.59s)

TestStartStop/group/old-k8s-version/serial/Pause (4.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-736992 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-736992 --alsologtostderr -v=1: (1.169888925s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736992 -n old-k8s-version-736992
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736992 -n old-k8s-version-736992: exit status 2 (540.45616ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-736992 -n old-k8s-version-736992
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-736992 -n old-k8s-version-736992: exit status 2 (556.708981ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-736992 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-736992 --alsologtostderr -v=1: (1.039377913s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736992 -n old-k8s-version-736992
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-736992 -n old-k8s-version-736992
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.76s)

TestStartStop/group/embed-certs/serial/FirstStart (94.39s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-764454 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-764454 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (1m34.385411717s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.39s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (18.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9n29t" [00ef5b1b-9b68-462b-b350-4dbfb9eb9a51] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1002 11:32:33.692114 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9n29t" [00ef5b1b-9b68-462b-b350-4dbfb9eb9a51] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.035573466s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (18.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9n29t" [00ef5b1b-9b68-462b-b350-4dbfb9eb9a51] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011242481s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-929502 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-929502 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/no-preload/serial/Pause (3.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-929502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-929502 -n no-preload-929502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-929502 -n no-preload-929502: exit status 2 (399.839624ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-929502 -n no-preload-929502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-929502 -n no-preload-929502: exit status 2 (395.783032ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-929502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-929502 -n no-preload-929502
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-929502 -n no-preload-929502
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.63s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-105131 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1002 11:33:18.557585 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-105131 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (1m26.46802895s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.47s)

TestStartStop/group/embed-certs/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-764454 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6acf45c1-8a1b-4d6d-b0e3-2d749ac8b347] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 11:33:35.509718 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
helpers_test.go:344: "busybox" [6acf45c1-8a1b-4d6d-b0e3-2d749ac8b347] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.032793449s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-764454 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.57s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-764454 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-764454 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.118120035s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-764454 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/embed-certs/serial/Stop (11.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-764454 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-764454 --alsologtostderr -v=3: (11.09203781s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-764454 -n embed-certs-764454
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-764454 -n embed-certs-764454: exit status 7 (72.450112ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-764454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (320.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-764454 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1002 11:34:20.137430 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-764454 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (5m20.187038474s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-764454 -n embed-certs-764454
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (320.76s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-105131 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2fbb36e-187d-47d8-a094-1e7f95baa78f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2fbb36e-187d-47d8-a094-1e7f95baa78f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.060580079s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-105131 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.60s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-105131 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-105131 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012734675s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-105131 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-105131 --alsologtostderr -v=3
E1002 11:34:32.485360 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:32.490600 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:32.500839 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:32.521076 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:32.561324 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:32.641539 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:32.801702 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:33.122264 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:33.763387 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:35.043635 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:37.604653 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:42.724879 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-105131 --alsologtostderr -v=3: (10.948656771s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131: exit status 7 (71.425166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-105131 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-105131 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1002 11:34:52.965508 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:34:53.503861 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:35:13.445864 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:35:54.406554 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:36:35.372296 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:35.377528 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:35.387806 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:35.408035 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:35.448303 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:35.528799 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:35.689218 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:36.009654 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:36.649959 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:37.930506 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:40.490889 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:45.611526 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:36:55.852190 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:37:16.327322 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
E1002 11:37:16.332488 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:37:33.692564 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 11:37:57.293371 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:38:35.509370 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 11:39:03.182635 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-105131 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (5m52.644311888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dd7cp" [ea4f8e3d-b086-4263-af28-c14269858e68] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1002 11:39:19.214459 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:39:20.136806 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dd7cp" [ea4f8e3d-b086-4263-af28-c14269858e68] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.035248471s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dd7cp" [ea4f8e3d-b086-4263-af28-c14269858e68] Running
E1002 11:39:32.485805 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012214061s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-764454 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-764454 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/embed-certs/serial/Pause (3.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-764454 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-764454 -n embed-certs-764454
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-764454 -n embed-certs-764454: exit status 2 (354.729503ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-764454 -n embed-certs-764454
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-764454 -n embed-certs-764454: exit status 2 (384.619085ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-764454 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-764454 -n embed-certs-764454
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-764454 -n embed-certs-764454
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)

TestStartStop/group/newest-cni/serial/FirstStart (56.91s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-114626 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1002 11:39:53.504301 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
E1002 11:40:00.176907 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-114626 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (56.914029504s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.91s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ghknt" [c61e2ade-e787-4a37-97bb-83ab20ebcf12] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ghknt" [c61e2ade-e787-4a37-97bb-83ab20ebcf12] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.03907347s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.82s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-114626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-114626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.820984318s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.82s)

TestStartStop/group/newest-cni/serial/Stop (8.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-114626 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-114626 --alsologtostderr -v=3: (8.206613171s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.21s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-114626 -n newest-cni-114626
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-114626 -n newest-cni-114626: exit status 7 (80.677619ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-114626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (36.92s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-114626 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-114626 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (36.404017303s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-114626 -n newest-cni-114626
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.92s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ghknt" [c61e2ade-e787-4a37-97bb-83ab20ebcf12] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01138765s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-105131 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-105131 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-105131 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131: exit status 2 (377.009947ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131: exit status 2 (356.401368ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-105131 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-105131 -n default-k8s-diff-port-105131
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)

TestNetworkPlugins/group/auto/Start (98.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m38.688668429s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.69s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-114626 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/newest-cni/serial/Pause (4.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-114626 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-114626 -n newest-cni-114626
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-114626 -n newest-cni-114626: exit status 2 (421.935505ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-114626 -n newest-cni-114626
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-114626 -n newest-cni-114626: exit status 2 (437.958641ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-114626 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-114626 -n newest-cni-114626
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-114626 -n newest-cni-114626
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.25s)
E1002 11:49:32.485683 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (70.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E1002 11:41:35.371916 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:42:03.054660 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
E1002 11:42:16.737732 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 11:42:33.691954 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m10.674902584s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.67s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-chdv7" [1f226cce-b6c0-4e46-b34f-f4b7c97acdbc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.030235475s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-367987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zsm4r" [b52cffc7-fdba-43db-8239-123bd3a3e053] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zsm4r" [b52cffc7-fdba-43db-8239-123bd3a3e053] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.012632663s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.42s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-367987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6ps2h" [4b75416d-f2bd-4042-b791-200136569bb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6ps2h" [4b75416d-f2bd-4042-b791-200136569bb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.015677222s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.43s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (87.48s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m27.484592445s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.48s)

TestNetworkPlugins/group/custom-flannel/Start (70.46s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1002 11:43:35.509696 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/addons-358443/client.crt: no such file or directory
E1002 11:44:20.136512 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/functional-499029/client.crt: no such file or directory
E1002 11:44:21.629826 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:21.635031 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:21.645311 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:21.665539 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:21.705733 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:21.785928 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:21.946287 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:22.267186 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:22.907847 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:24.189001 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:26.750104 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:31.871226 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
E1002 11:44:32.486210 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/old-k8s-version-736992/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m10.461101608s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.46s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-367987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pbnzk" [2a0c3850-baaa-44ec-89da-e2f63f7516ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 11:44:42.111950 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-pbnzk" [2a0c3850-baaa-44ec-89da-e2f63f7516ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.014969359s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.50s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6zzfn" [30b4cce9-73eb-46d9-85a0-3e0230545441] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.031400649s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (11.49s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-367987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4czfv" [73188679-cf66-42f5-9dc5-12ce30326715] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4czfv" [73188679-cf66-42f5-9dc5-12ce30326715] Running
E1002 11:45:02.592597 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.012807298s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.49s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.36s)

TestNetworkPlugins/group/calico/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.34s)

TestNetworkPlugins/group/false/Start (60.32s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m0.318001411s)
--- PASS: TestNetworkPlugins/group/false/Start (60.32s)

TestNetworkPlugins/group/enable-default-cni/Start (59.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1002 11:45:43.553371 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (59.128723872s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.13s)

TestNetworkPlugins/group/false/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.37s)

TestNetworkPlugins/group/false/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-367987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-82zrz" [cf7bc959-0d53-441a-b133-5606e8dbb4d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-82zrz" [cf7bc959-0d53-441a-b133-5606e8dbb4d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.010809171s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.42s)

TestNetworkPlugins/group/false/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.27s)

TestNetworkPlugins/group/false/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

TestNetworkPlugins/group/false/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-367987 replace --force -f testdata/netcat-deployment.yaml
E1002 11:46:35.373616 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/no-preload-929502/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hmpb8" [cf737a48-3133-4c1a-a9e7-5a16fe326a97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hmpb8" [cf737a48-3133-4c1a-a9e7-5a16fe326a97] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.010303272s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.62s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

TestNetworkPlugins/group/flannel/Start (68.97s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E1002 11:47:05.473489 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/default-k8s-diff-port-105131/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m8.965846549s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.97s)

TestNetworkPlugins/group/bridge/Start (95.11s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1002 11:47:33.692209 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/ingress-addon-legacy-566627/client.crt: no such file or directory
E1002 11:47:43.217988 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:43.223222 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:43.233463 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:43.253683 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:43.293902 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:43.374154 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:43.535011 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:43.855487 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:44.496493 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:45.286452 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:45.291801 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:45.302039 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:45.322749 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:45.364005 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:45.445348 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:45.606204 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:45.777590 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:45.927004 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:46.567840 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:47.848667 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:48.337802 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:50.409520 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:53.458452 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
E1002 11:47:55.530308 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
E1002 11:47:56.550289 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/skaffold-513062/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m35.109650009s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.11s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-x698h" [e64f769b-f84e-40cf-9764-16f3c64589a0] Running
E1002 11:48:03.699256 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/kindnet-367987/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.02815739s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-367987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-klrsf" [315b3052-e7db-4c79-9426-e0dd237b06e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 11:48:05.771265 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/auto-367987/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-klrsf" [315b3052-e7db-4c79-9426-e0dd237b06e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.011060688s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/Start (56.43s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-367987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (56.428609368s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (56.43s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-367987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8xssv" [ca8a4809-a7e1-4e97-a2ea-582466af229a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8xssv" [ca8a4809-a7e1-4e97-a2ea-582466af229a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.012156481s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.43s)

TestNetworkPlugins/group/bridge/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.29s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.27s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-367987 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-367987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ndjjp" [65293037-7152-4fd3-9cac-76c0c4b97856] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 11:49:37.585432 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:37.590728 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:37.600992 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:37.621329 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:37.661591 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:37.741806 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:37.902171 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:38.222703 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:38.862879 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:40.143489 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ndjjp" [65293037-7152-4fd3-9cac-76c0c4b97856] Running
E1002 11:49:42.704059 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
E1002 11:49:47.824583 2139700 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/custom-flannel-367987/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.009238273s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.33s)

TestNetworkPlugins/group/kubenet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-367987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-367987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)

Test skip (24/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0.61s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-213108 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-213108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-213108
--- SKIP: TestDownloadOnlyKic (0.61s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:422: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-821483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-821483
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

TestNetworkPlugins/group/cilium (4.05s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-367987 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-367987

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-367987

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-367987

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-367987

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-367987

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-367987

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-367987

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-367987

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-367987
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-367987
>>> host: /etc/nsswitch.conf:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /etc/hosts:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /etc/resolv.conf:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-367987
>>> host: crictl pods:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: crictl containers:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> k8s: describe netcat deployment:
error: context "cilium-367987" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-367987" does not exist
>>> k8s: netcat logs:
error: context "cilium-367987" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-367987" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-367987" does not exist
>>> k8s: coredns logs:
error: context "cilium-367987" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-367987" does not exist
>>> k8s: api server logs:
error: context "cilium-367987" does not exist
>>> host: /etc/cni:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: ip a s:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: ip r s:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: iptables-save:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: iptables table nat:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-367987
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-367987
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-367987" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-367987" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-367987
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-367987
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-367987" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-367987" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-367987" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-367987" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-367987" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: kubelet daemon config:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> k8s: kubelet logs:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17340-2134307/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 11:19:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-994778
contexts:
- context:
    cluster: pause-994778
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 11:19:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-994778
  name: pause-994778
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-994778
  user:
    client-certificate: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/pause-994778/client.crt
    client-key: /home/jenkins/minikube-integration/17340-2134307/.minikube/profiles/pause-994778/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-367987
>>> host: docker daemon status:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: docker daemon config:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: docker system info:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: cri-docker daemon status:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: cri-docker daemon config:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: cri-dockerd version:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: containerd daemon status:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: containerd daemon config:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: containerd config dump:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: crio daemon status:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: crio daemon config:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: /etc/crio:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
>>> host: crio config:
* Profile "cilium-367987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-367987"
----------------------- debugLogs end: cilium-367987 [took: 3.88755329s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-367987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-367987
--- SKIP: TestNetworkPlugins/group/cilium (4.05s)