Test Report: Docker_Linux_crio_arm64 17585

ea770f64c27c5646b2ec1dfcd286218478f671de:2023-11-08:31788

Failed tests (7/308)

Order | Failed test                                          | Duration (s)
------|------------------------------------------------------|-------------
28    | TestAddons/parallel/Ingress                          | 169.67
107   | TestFunctional/parallel/License                      | 0.3
159   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 175.78
209   | TestMultiNode/serial/PingHostFrom2Pods               | 4.71
230   | TestRunningBinaryUpgrade                             | 73.42
233   | TestMissingContainerUpgrade                          | 187.1
245   | TestStoppedBinaryUpgrade/Upgrade                     | 82.59
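For local triage, a single failed test can be re-run against the same driver/runtime combination. A minimal sketch, assuming a checkout of the minikube repository with a built out/minikube-linux-arm64 binary; the timeout value is arbitrary, and repo-specific test flags (for example start arguments selecting the docker driver and the crio runtime) may also be required:

    # Re-run only the Ingress addon test with verbose output (standard `go test` flags only).
    go test ./test/integration -run "TestAddons/parallel/Ingress" -v -timeout 60m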
TestAddons/parallel/Ingress (169.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-862145 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-862145 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-862145 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0aa54766-1335-440e-8513-b42dde25a678] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0aa54766-1335-440e-8513-b42dde25a678] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013960386s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-862145 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.621945688s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
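Exit status 28 from curl means the request timed out rather than being refused, so the command reached the node but the ingress never answered. A minimal diagnostic sketch for this step, reusing the profile name and the controller label selector already shown in this log; the --tail and --max-time values are arbitrary choices, not taken from the test:

    # Inspect the ingress-nginx controller, then retry the same request verbosely with a bounded timeout.
    kubectl --context addons-862145 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
    kubectl --context addons-862145 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50
    out/minikube-linux-arm64 -p addons-862145 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"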
addons_test.go:285: (dbg) Run:  kubectl --context addons-862145 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.061020088s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
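The nslookup timeout shows that nothing answered DNS queries on 192.168.49.2 at all, which points at the ingress-dns responder or UDP/53 reachability of the node rather than at a wrong record. A hedged follow-up sketch; the assumption that the addon's pod lands in kube-system and the dig timeout values are mine, not taken from this log:

    # Check whether the ingress-dns pod is running, then query the node IP directly with a short timeout.
    kubectl --context addons-862145 -n kube-system get pods -o wide | grep -i ingress-dns
    dig +time=5 +tries=1 hello-john.test @192.168.49.2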
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-862145 addons disable ingress-dns --alsologtostderr -v=1: (1.387524989s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-862145 addons disable ingress --alsologtostderr -v=1: (7.784889597s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-862145
helpers_test.go:235: (dbg) docker inspect addons-862145:

-- stdout --
	[
	    {
	        "Id": "967d7b488671874e6a89a7ddc274d97f964358ed6a9c9849166c970160ffdbb0",
	        "Created": "2023-11-07T23:30:27.435480949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1455989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:30:27.805781597Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/967d7b488671874e6a89a7ddc274d97f964358ed6a9c9849166c970160ffdbb0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/967d7b488671874e6a89a7ddc274d97f964358ed6a9c9849166c970160ffdbb0/hostname",
	        "HostsPath": "/var/lib/docker/containers/967d7b488671874e6a89a7ddc274d97f964358ed6a9c9849166c970160ffdbb0/hosts",
	        "LogPath": "/var/lib/docker/containers/967d7b488671874e6a89a7ddc274d97f964358ed6a9c9849166c970160ffdbb0/967d7b488671874e6a89a7ddc274d97f964358ed6a9c9849166c970160ffdbb0-json.log",
	        "Name": "/addons-862145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-862145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-862145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7d89310e34b5ea52f3d642df71f2e8b88a4e6cce25f01bb0a47f46428d5c2172-init/diff:/var/lib/docker/overlay2/8e491d7cb3241f95e04087f3d63eb57f6d89d07f6c4a9f8c41570cc55f16b330/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d89310e34b5ea52f3d642df71f2e8b88a4e6cce25f01bb0a47f46428d5c2172/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d89310e34b5ea52f3d642df71f2e8b88a4e6cce25f01bb0a47f46428d5c2172/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d89310e34b5ea52f3d642df71f2e8b88a4e6cce25f01bb0a47f46428d5c2172/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-862145",
	                "Source": "/var/lib/docker/volumes/addons-862145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-862145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-862145",
	                "name.minikube.sigs.k8s.io": "addons-862145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b185840b156848e7b4597efbac7b98b6eb71302ca417ef37db529a2bc1ecde93",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34067"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34064"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34065"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b185840b1568",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-862145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "967d7b488671",
	                        "addons-862145"
	                    ],
	                    "NetworkID": "53a72c3e33ba33b08af825761c03722e0c5862b49461348f512d5726e3355723",
	                    "EndpointID": "b81686542c9a2bdb526f0c4dc29522b318d353b0ab86eb676b279f7865053f60",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-862145 -n addons-862145
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-862145 logs -n 25: (1.577612259s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC | 07 Nov 23 23:30 UTC |
	| delete  | -p download-only-073722                                                                     | download-only-073722   | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC | 07 Nov 23 23:30 UTC |
	| delete  | -p download-only-073722                                                                     | download-only-073722   | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC | 07 Nov 23 23:30 UTC |
	| start   | --download-only -p                                                                          | download-docker-028350 | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC |                     |
	|         | download-docker-028350                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-028350                                                                   | download-docker-028350 | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC | 07 Nov 23 23:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-523429   | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC |                     |
	|         | binary-mirror-523429                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43481                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-523429                                                                     | binary-mirror-523429   | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC | 07 Nov 23 23:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC |                     |
	|         | addons-862145                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC |                     |
	|         | addons-862145                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-862145 --wait=true                                                                | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:30 UTC | 07 Nov 23 23:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:33 UTC | 07 Nov 23 23:33 UTC |
	|         | -p addons-862145                                                                            |                        |         |         |                     |                     |
	| ip      | addons-862145 ip                                                                            | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:33 UTC | 07 Nov 23 23:33 UTC |
	| addons  | addons-862145 addons disable                                                                | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:33 UTC | 07 Nov 23 23:33 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-862145 ssh cat                                                                       | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:33 UTC | 07 Nov 23 23:33 UTC |
	|         | /opt/local-path-provisioner/pvc-1bd0fe32-732d-473f-9e2b-8ba652c5c557_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-862145 addons disable                                                                | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:33 UTC | 07 Nov 23 23:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:33 UTC | 07 Nov 23 23:33 UTC |
	|         | addons-862145                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:33 UTC | 07 Nov 23 23:33 UTC |
	|         | -p addons-862145                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-862145 addons                                                                        | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:34 UTC | 07 Nov 23 23:34 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-862145 addons                                                                        | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:34 UTC | 07 Nov 23 23:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:34 UTC | 07 Nov 23 23:34 UTC |
	|         | addons-862145                                                                               |                        |         |         |                     |                     |
	| addons  | addons-862145 addons                                                                        | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:34 UTC | 07 Nov 23 23:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-862145 ssh curl -s                                                                   | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-862145 ip                                                                            | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	| addons  | addons-862145 addons disable                                                                | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-862145 addons disable                                                                | addons-862145          | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:37 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:30:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:30:03.139450 1455521 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:30:03.139665 1455521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:30:03.139698 1455521 out.go:309] Setting ErrFile to fd 2...
	I1107 23:30:03.139720 1455521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:30:03.140049 1455521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1107 23:30:03.140538 1455521 out.go:303] Setting JSON to false
	I1107 23:30:03.141682 1455521 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22353,"bootTime":1699377451,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1107 23:30:03.141800 1455521 start.go:138] virtualization:  
	I1107 23:30:03.144436 1455521 out.go:177] * [addons-862145] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:30:03.146605 1455521 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:30:03.148354 1455521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:30:03.146778 1455521 notify.go:220] Checking for updates...
	I1107 23:30:03.151756 1455521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:30:03.154025 1455521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1107 23:30:03.155862 1455521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:30:03.157920 1455521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:30:03.160004 1455521 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:30:03.184000 1455521 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:30:03.184113 1455521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:30:03.262214 1455521 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-07 23:30:03.251609007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:30:03.262332 1455521 docker.go:295] overlay module found
	I1107 23:30:03.264362 1455521 out.go:177] * Using the docker driver based on user configuration
	I1107 23:30:03.266327 1455521 start.go:298] selected driver: docker
	I1107 23:30:03.266352 1455521 start.go:902] validating driver "docker" against <nil>
	I1107 23:30:03.266383 1455521 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:30:03.267011 1455521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:30:03.336880 1455521 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-07 23:30:03.326945235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:30:03.337063 1455521 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:30:03.337313 1455521 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:30:03.339703 1455521 out.go:177] * Using Docker driver with root privileges
	I1107 23:30:03.341847 1455521 cni.go:84] Creating CNI manager for ""
	I1107 23:30:03.341877 1455521 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:30:03.341890 1455521 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:30:03.341909 1455521 start_flags.go:323] config:
	{Name:addons-862145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-862145 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cn
i FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:30:03.344530 1455521 out.go:177] * Starting control plane node addons-862145 in cluster addons-862145
	I1107 23:30:03.346822 1455521 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:30:03.349530 1455521 out.go:177] * Pulling base image ...
	I1107 23:30:03.351744 1455521 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:30:03.351805 1455521 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1107 23:30:03.351819 1455521 cache.go:56] Caching tarball of preloaded images
	I1107 23:30:03.351831 1455521 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:30:03.351902 1455521 preload.go:174] Found /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1107 23:30:03.351912 1455521 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:30:03.352302 1455521 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/config.json ...
	I1107 23:30:03.352339 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/config.json: {Name:mk8f4971a778d3104d56606a89c0c0082048ac69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:03.369380 1455521 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:30:03.369538 1455521 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:30:03.369560 1455521 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1107 23:30:03.369566 1455521 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1107 23:30:03.369574 1455521 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 23:30:03.369579 1455521 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1107 23:30:19.688381 1455521 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
	I1107 23:30:19.688422 1455521 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:30:19.688473 1455521 start.go:365] acquiring machines lock for addons-862145: {Name:mk88de8777e37acbcecd726ac92e9e65c560fc5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:30:19.688594 1455521 start.go:369] acquired machines lock for "addons-862145" in 97.049µs
	I1107 23:30:19.688631 1455521 start.go:93] Provisioning new machine with config: &{Name:addons-862145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-862145 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:30:19.688720 1455521 start.go:125] createHost starting for "" (driver="docker")
	I1107 23:30:19.690810 1455521 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1107 23:30:19.691070 1455521 start.go:159] libmachine.API.Create for "addons-862145" (driver="docker")
	I1107 23:30:19.691108 1455521 client.go:168] LocalClient.Create starting
	I1107 23:30:19.691227 1455521 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem
	I1107 23:30:20.211872 1455521 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem
	I1107 23:30:20.937957 1455521 cli_runner.go:164] Run: docker network inspect addons-862145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 23:30:20.955291 1455521 cli_runner.go:211] docker network inspect addons-862145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 23:30:20.955384 1455521 network_create.go:281] running [docker network inspect addons-862145] to gather additional debugging logs...
	I1107 23:30:20.955412 1455521 cli_runner.go:164] Run: docker network inspect addons-862145
	W1107 23:30:20.972180 1455521 cli_runner.go:211] docker network inspect addons-862145 returned with exit code 1
	I1107 23:30:20.972215 1455521 network_create.go:284] error running [docker network inspect addons-862145]: docker network inspect addons-862145: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-862145 not found
	I1107 23:30:20.972233 1455521 network_create.go:286] output of [docker network inspect addons-862145]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-862145 not found
	
	** /stderr **
	I1107 23:30:20.972342 1455521 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:30:20.990131 1455521 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000722a20}
	I1107 23:30:20.990173 1455521 network_create.go:124] attempt to create docker network addons-862145 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 23:30:20.990233 1455521 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-862145 addons-862145
	I1107 23:30:21.071631 1455521 network_create.go:108] docker network addons-862145 192.168.49.0/24 created
	I1107 23:30:21.071668 1455521 kic.go:121] calculated static IP "192.168.49.2" for the "addons-862145" container
	I1107 23:30:21.071741 1455521 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:30:21.090455 1455521 cli_runner.go:164] Run: docker volume create addons-862145 --label name.minikube.sigs.k8s.io=addons-862145 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:30:21.109622 1455521 oci.go:103] Successfully created a docker volume addons-862145
	I1107 23:30:21.109716 1455521 cli_runner.go:164] Run: docker run --rm --name addons-862145-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-862145 --entrypoint /usr/bin/test -v addons-862145:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:30:23.010314 1455521 cli_runner.go:217] Completed: docker run --rm --name addons-862145-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-862145 --entrypoint /usr/bin/test -v addons-862145:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.900553545s)
	I1107 23:30:23.010358 1455521 oci.go:107] Successfully prepared a docker volume addons-862145
	I1107 23:30:23.010393 1455521 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:30:23.010418 1455521 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:30:23.010502 1455521 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-862145:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:30:27.346354 1455521 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-862145:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.335809363s)
	I1107 23:30:27.346387 1455521 kic.go:203] duration metric: took 4.335966 seconds to extract preloaded images to volume
	W1107 23:30:27.346543 1455521 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:30:27.346659 1455521 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:30:27.418684 1455521 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-862145 --name addons-862145 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-862145 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-862145 --network addons-862145 --ip 192.168.49.2 --volume addons-862145:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:30:27.813953 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Running}}
	I1107 23:30:27.837786 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:30:27.863303 1455521 cli_runner.go:164] Run: docker exec addons-862145 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:30:27.923718 1455521 oci.go:144] the created container "addons-862145" has a running status.
	I1107 23:30:27.923747 1455521 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa...
	I1107 23:30:28.549017 1455521 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:30:28.588197 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:30:28.620715 1455521 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:30:28.620738 1455521 kic_runner.go:114] Args: [docker exec --privileged addons-862145 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:30:28.725565 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:30:28.759517 1455521 machine.go:88] provisioning docker machine ...
	I1107 23:30:28.759553 1455521 ubuntu.go:169] provisioning hostname "addons-862145"
	I1107 23:30:28.759621 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:30:28.785393 1455521 main.go:141] libmachine: Using SSH client type: native
	I1107 23:30:28.785838 1455521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34068 <nil> <nil>}
	I1107 23:30:28.785858 1455521 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-862145 && echo "addons-862145" | sudo tee /etc/hostname
	I1107 23:30:28.961317 1455521 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-862145
	
	I1107 23:30:28.961391 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:30:28.990320 1455521 main.go:141] libmachine: Using SSH client type: native
	I1107 23:30:28.990714 1455521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34068 <nil> <nil>}
	I1107 23:30:28.990734 1455521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-862145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-862145/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-862145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:30:29.123575 1455521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:30:29.123605 1455521 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-1449649/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-1449649/.minikube}
	I1107 23:30:29.123624 1455521 ubuntu.go:177] setting up certificates
	I1107 23:30:29.123633 1455521 provision.go:83] configureAuth start
	I1107 23:30:29.123697 1455521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-862145
	I1107 23:30:29.143102 1455521 provision.go:138] copyHostCerts
	I1107 23:30:29.143191 1455521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem (1082 bytes)
	I1107 23:30:29.143319 1455521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem (1123 bytes)
	I1107 23:30:29.143402 1455521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem (1675 bytes)
	I1107 23:30:29.143527 1455521 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem org=jenkins.addons-862145 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-862145]
	I1107 23:30:29.425268 1455521 provision.go:172] copyRemoteCerts
	I1107 23:30:29.425347 1455521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:30:29.425388 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:30:29.444914 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:30:29.540958 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 23:30:29.569713 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 23:30:29.597886 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:30:29.626150 1455521 provision.go:86] duration metric: configureAuth took 502.503045ms
	I1107 23:30:29.626175 1455521 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:30:29.626367 1455521 config.go:182] Loaded profile config "addons-862145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:30:29.626481 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:30:29.649152 1455521 main.go:141] libmachine: Using SSH client type: native
	I1107 23:30:29.649583 1455521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34068 <nil> <nil>}
	I1107 23:30:29.649605 1455521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:30:29.900276 1455521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:30:29.900299 1455521 machine.go:91] provisioned docker machine in 1.140756239s
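Editor's note: the final provisioning step above writes a one-line environment file for CRI-O and restarts the service. A minimal sketch of checking it by hand over minikube's SSH (the profile name is the one used throughout this run; that the crio unit actually sources this file is inferred from the restart, not shown in the log):

    # Sketch: confirm the insecure-registry option landed and CRI-O came back up.
    minikube -p addons-862145 ssh -- sudo cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    minikube -p addons-862145 ssh -- sudo systemctl is-active crio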
	I1107 23:30:29.900309 1455521 client.go:171] LocalClient.Create took 10.209194575s
	I1107 23:30:29.900326 1455521 start.go:167] duration metric: libmachine.API.Create for "addons-862145" took 10.209257072s
	I1107 23:30:29.900334 1455521 start.go:300] post-start starting for "addons-862145" (driver="docker")
	I1107 23:30:29.900344 1455521 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:30:29.900423 1455521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:30:29.900476 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:30:29.925345 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:30:30.046681 1455521 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:30:30.052478 1455521 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:30:30.052512 1455521 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:30:30.052531 1455521 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:30:30.052539 1455521 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:30:30.052555 1455521 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/addons for local assets ...
	I1107 23:30:30.052632 1455521 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/files for local assets ...
	I1107 23:30:30.052657 1455521 start.go:303] post-start completed in 152.315905ms
	I1107 23:30:30.053005 1455521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-862145
	I1107 23:30:30.102777 1455521 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/config.json ...
	I1107 23:30:30.103139 1455521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:30:30.103206 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:30:30.126157 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:30:30.220973 1455521 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:30:30.228026 1455521 start.go:128] duration metric: createHost completed in 10.539290284s
	I1107 23:30:30.228051 1455521 start.go:83] releasing machines lock for "addons-862145", held for 10.539443489s
	I1107 23:30:30.228131 1455521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-862145
	I1107 23:30:30.247027 1455521 ssh_runner.go:195] Run: cat /version.json
	I1107 23:30:30.247081 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:30:30.247112 1455521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:30:30.247183 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:30:30.273097 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:30:30.277383 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:30:30.570723 1455521 ssh_runner.go:195] Run: systemctl --version
	I1107 23:30:30.576915 1455521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:30:30.728766 1455521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:30:30.734789 1455521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:30:30.758969 1455521 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:30:30.759042 1455521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:30:30.800508 1455521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
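Editor's note: the two find/mv passes above just rename the stock CNI configs out of the way so kindnet can own pod networking. An equivalent manual sketch for the two bridge configs named in the log line above (the loopback config is renamed the same way):

    # Sketch: disable the default bridge CNI configs (minikube appends .mk_disabled).
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
    sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled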
	I1107 23:30:30.800534 1455521 start.go:472] detecting cgroup driver to use...
	I1107 23:30:30.800566 1455521 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:30:30.800622 1455521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:30:30.820086 1455521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:30:30.834246 1455521 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:30:30.834313 1455521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:30:30.851387 1455521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:30:30.868990 1455521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:30:30.972295 1455521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:30:31.091117 1455521 docker.go:219] disabling docker service ...
	I1107 23:30:31.091228 1455521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:30:31.115053 1455521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:30:31.130135 1455521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:30:31.232718 1455521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:30:31.341558 1455521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:30:31.355477 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:30:31.375209 1455521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:30:31.375287 1455521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:30:31.388441 1455521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:30:31.388525 1455521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:30:31.401333 1455521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:30:31.414188 1455521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:30:31.425903 1455521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:30:31.436845 1455521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:30:31.447360 1455521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:30:31.457822 1455521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:30:31.552364 1455521 ssh_runner.go:195] Run: sudo systemctl restart crio
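Editor's note: the sed edits above only touch three keys; after the restart the drop-in should contain roughly the following (a sketch of just those keys; the real 02-crio.conf in the kicbase image carries many more settings):

    # Sketch: verify the keys the log edited in /etc/crio/crio.conf.d/02-crio.conf.
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf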
	I1107 23:30:31.674602 1455521 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:30:31.674720 1455521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:30:31.679707 1455521 start.go:540] Will wait 60s for crictl version
	I1107 23:30:31.679811 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:30:31.684393 1455521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:30:31.731445 1455521 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1107 23:30:31.731590 1455521 ssh_runner.go:195] Run: crio --version
	I1107 23:30:31.782574 1455521 ssh_runner.go:195] Run: crio --version
	I1107 23:30:31.830232 1455521 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1107 23:30:31.832055 1455521 cli_runner.go:164] Run: docker network inspect addons-862145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:30:31.850281 1455521 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1107 23:30:31.855049 1455521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
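Editor's note: the one-liner above rewrites /etc/hosts through a temp file plus sudo cp rather than redirecting in place: a plain "> /etc/hosts" redirection would run with the caller's privileges and would also truncate the file while grep is still reading it. A generalized sketch of the same pattern:

    # Sketch: safely replace a root-owned file whose old contents feed the new contents.
    tmp=$(mktemp)
    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.49.1\thost.minikube.internal\n'; } > "$tmp"
    sudo cp "$tmp" /etc/hosts && rm -f "$tmp"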
	I1107 23:30:31.869061 1455521 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:30:31.869136 1455521 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:30:31.936317 1455521 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:30:31.936343 1455521 crio.go:415] Images already preloaded, skipping extraction
	I1107 23:30:31.936414 1455521 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:30:31.978582 1455521 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:30:31.978606 1455521 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:30:31.978692 1455521 ssh_runner.go:195] Run: crio config
	I1107 23:30:32.037995 1455521 cni.go:84] Creating CNI manager for ""
	I1107 23:30:32.038018 1455521 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:30:32.038050 1455521 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:30:32.038074 1455521 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-862145 NodeName:addons-862145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:30:32.038214 1455521 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-862145"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
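Editor's note: the rendered config above is copied to /var/tmp/minikube/kubeadm.yaml a few lines below. A hedged sketch of inspecting it on the node with the same kubeadm binary this run uses (kubeadm config validate exists in recent kubeadm releases but is not exercised by minikube here):

    # Sketch: view and validate the generated kubeadm config on the node.
    minikube -p addons-862145 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
    minikube -p addons-862145 ssh -- sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml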
	
	I1107 23:30:32.038271 1455521 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-862145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-862145 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
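Editor's note: the unit drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below; a quick sketch of checking what systemd actually loaded:

    # Sketch: show the merged kubelet unit (base unit plus the 10-kubeadm.conf drop-in).
    minikube -p addons-862145 ssh -- sudo systemctl cat kubelet
    minikube -p addons-862145 ssh -- sudo systemctl status kubelet --no-pager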
	I1107 23:30:32.038338 1455521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:30:32.049855 1455521 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:30:32.050007 1455521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:30:32.060929 1455521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1107 23:30:32.083065 1455521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:30:32.105110 1455521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1107 23:30:32.126696 1455521 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:30:32.131400 1455521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:30:32.145175 1455521 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145 for IP: 192.168.49.2
	I1107 23:30:32.145205 1455521 certs.go:190] acquiring lock for shared ca certs: {Name:mk4f8465cbc85ba57ebf3be6025d59928913c61b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:32.146164 1455521 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key
	I1107 23:30:32.684046 1455521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt ...
	I1107 23:30:32.684088 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt: {Name:mkc4040272e0c90e7860d35ab8d448c0b052328d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:32.684285 1455521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key ...
	I1107 23:30:32.684298 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key: {Name:mk38db9d66aa17cc526c2ab22aa00cd3e3aeab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:32.684389 1455521 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key
	I1107 23:30:32.973114 1455521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt ...
	I1107 23:30:32.973148 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt: {Name:mk254183a9563b83a204e705da70aebfd81cf992 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:32.974067 1455521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key ...
	I1107 23:30:32.974086 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key: {Name:mk014b8c357a566964e24a66f5c0066a433916a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:32.974211 1455521 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.key
	I1107 23:30:32.974229 1455521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt with IP's: []
	I1107 23:30:34.043438 1455521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt ...
	I1107 23:30:34.043472 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: {Name:mk81805b6281a6443abbdb49ce3ff7fd543cc73f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:34.043672 1455521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.key ...
	I1107 23:30:34.043695 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.key: {Name:mk7ed0f1cf6b3f240e6c0a811a5aaee4aff87ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:34.043798 1455521 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.key.dd3b5fb2
	I1107 23:30:34.043821 1455521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:30:34.196787 1455521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.crt.dd3b5fb2 ...
	I1107 23:30:34.196818 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.crt.dd3b5fb2: {Name:mkbbeb237271e26d3bac66c525254f7f0aed72a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:34.197702 1455521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.key.dd3b5fb2 ...
	I1107 23:30:34.197722 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.key.dd3b5fb2: {Name:mk3b0c89ffe21c9d797fca729140154f21fe8d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:34.197810 1455521 certs.go:337] copying /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.crt
	I1107 23:30:34.197889 1455521 certs.go:341] copying /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.key
	I1107 23:30:34.197943 1455521 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/proxy-client.key
	I1107 23:30:34.197963 1455521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/proxy-client.crt with IP's: []
	I1107 23:30:34.340377 1455521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/proxy-client.crt ...
	I1107 23:30:34.340405 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/proxy-client.crt: {Name:mkd93fe9192b6d7f28964ff95cbe80de93b20717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:34.340581 1455521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/proxy-client.key ...
	I1107 23:30:34.340596 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/proxy-client.key: {Name:mke55644e6b35129e79ed1ba98b08616589cd7bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:30:34.341441 1455521 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:30:34.341497 1455521 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem (1082 bytes)
	I1107 23:30:34.341532 1455521 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:30:34.341561 1455521 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem (1675 bytes)
	I1107 23:30:34.342186 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:30:34.372228 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:30:34.400998 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:30:34.430149 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:30:34.458606 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:30:34.487097 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:30:34.515857 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:30:34.545127 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:30:34.574513 1455521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:30:34.602935 1455521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:30:34.623825 1455521 ssh_runner.go:195] Run: openssl version
	I1107 23:30:34.630871 1455521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:30:34.642519 1455521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:30:34.647032 1455521 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:30 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:30:34.647109 1455521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:30:34.655568 1455521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
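Editor's note: the two steps above install minikubeCA.pem into the system trust directory under its subject-hash name; a short sketch of how the b5213941.0 link name is derived:

    # Sketch: the trust store expects <subject-hash>.0 symlinks; openssl computes the hash.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0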
	I1107 23:30:34.667448 1455521 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:30:34.671765 1455521 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:30:34.671832 1455521 kubeadm.go:404] StartCluster: {Name:addons-862145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-862145 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:30:34.671951 1455521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:30:34.672012 1455521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:30:34.716985 1455521 cri.go:89] found id: ""
	I1107 23:30:34.717069 1455521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:30:34.727746 1455521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:30:34.738199 1455521 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 23:30:34.738314 1455521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:30:34.748919 1455521 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:30:34.748962 1455521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 23:30:34.802535 1455521 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1107 23:30:34.803060 1455521 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:30:34.847299 1455521 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:30:34.847406 1455521 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1107 23:30:34.847470 1455521 kubeadm.go:322] OS: Linux
	I1107 23:30:34.847539 1455521 kubeadm.go:322] CGROUPS_CPU: enabled
	I1107 23:30:34.847606 1455521 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1107 23:30:34.847674 1455521 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1107 23:30:34.847744 1455521 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1107 23:30:34.847812 1455521 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1107 23:30:34.847882 1455521 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1107 23:30:34.847947 1455521 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1107 23:30:34.848013 1455521 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1107 23:30:34.848087 1455521 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1107 23:30:34.929834 1455521 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:30:34.930099 1455521 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:30:34.930232 1455521 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:30:35.200207 1455521 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:30:35.203072 1455521 out.go:204]   - Generating certificates and keys ...
	I1107 23:30:35.203271 1455521 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:30:35.203368 1455521 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:30:35.729330 1455521 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:30:36.253580 1455521 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:30:37.067751 1455521 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:30:38.195098 1455521 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:30:38.911971 1455521 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:30:38.912378 1455521 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-862145 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:30:39.612367 1455521 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:30:39.612840 1455521 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-862145 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:30:39.975843 1455521 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:30:40.422824 1455521 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:30:40.916292 1455521 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:30:40.916695 1455521 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:30:41.284020 1455521 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:30:41.567758 1455521 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:30:41.948148 1455521 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:30:42.954491 1455521 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:30:42.955245 1455521 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:30:42.958093 1455521 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:30:42.960305 1455521 out.go:204]   - Booting up control plane ...
	I1107 23:30:42.960474 1455521 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:30:42.960569 1455521 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:30:42.962534 1455521 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:30:42.973987 1455521 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:30:42.974907 1455521 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:30:42.975216 1455521 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:30:43.073651 1455521 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:30:50.076472 1455521 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002539 seconds
	I1107 23:30:50.076589 1455521 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:30:50.091745 1455521 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:30:50.619128 1455521 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:30:50.619319 1455521 kubeadm.go:322] [mark-control-plane] Marking the node addons-862145 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:30:51.133261 1455521 kubeadm.go:322] [bootstrap-token] Using token: m93yqw.hhjzpwv7x2klm9ou
	I1107 23:30:51.135150 1455521 out.go:204]   - Configuring RBAC rules ...
	I1107 23:30:51.135284 1455521 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:30:51.141523 1455521 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:30:51.151225 1455521 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:30:51.155840 1455521 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:30:51.161618 1455521 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:30:51.166687 1455521 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:30:51.181815 1455521 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:30:51.420800 1455521 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:30:51.579085 1455521 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:30:51.580401 1455521 kubeadm.go:322] 
	I1107 23:30:51.580470 1455521 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:30:51.580478 1455521 kubeadm.go:322] 
	I1107 23:30:51.580557 1455521 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:30:51.580567 1455521 kubeadm.go:322] 
	I1107 23:30:51.580591 1455521 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:30:51.580651 1455521 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:30:51.580708 1455521 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:30:51.580716 1455521 kubeadm.go:322] 
	I1107 23:30:51.580773 1455521 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1107 23:30:51.580782 1455521 kubeadm.go:322] 
	I1107 23:30:51.580827 1455521 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:30:51.580840 1455521 kubeadm.go:322] 
	I1107 23:30:51.580889 1455521 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:30:51.580971 1455521 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:30:51.581043 1455521 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:30:51.581052 1455521 kubeadm.go:322] 
	I1107 23:30:51.581130 1455521 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:30:51.581209 1455521 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:30:51.581218 1455521 kubeadm.go:322] 
	I1107 23:30:51.581307 1455521 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m93yqw.hhjzpwv7x2klm9ou \
	I1107 23:30:51.581417 1455521 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 \
	I1107 23:30:51.581485 1455521 kubeadm.go:322] 	--control-plane 
	I1107 23:30:51.581493 1455521 kubeadm.go:322] 
	I1107 23:30:51.581580 1455521 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:30:51.581588 1455521 kubeadm.go:322] 
	I1107 23:30:51.581664 1455521 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m93yqw.hhjzpwv7x2klm9ou \
	I1107 23:30:51.581768 1455521 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 
	I1107 23:30:51.585472 1455521 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1107 23:30:51.585590 1455521 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:30:51.585611 1455521 cni.go:84] Creating CNI manager for ""
	I1107 23:30:51.585619 1455521 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:30:51.588603 1455521 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:30:51.590586 1455521 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:30:51.602618 1455521 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:30:51.602637 1455521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:30:51.627324 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
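Editor's note: the CNI manifest is applied from /var/tmp/minikube/cni.yaml; the resource names inside it are not shown in this log, so this sketch of confirming the result goes by file and namespace rather than by name:

    # Sketch: inspect the applied kindnet manifest and watch kube-system pods come up.
    minikube -p addons-862145 ssh -- sudo cat /var/tmp/minikube/cni.yaml
    kubectl --context addons-862145 -n kube-system get pods -o wide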
	I1107 23:30:52.540178 1455521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:30:52.540364 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:52.540482 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=addons-862145 minikube.k8s.io/updated_at=2023_11_07T23_30_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:52.566282 1455521 ops.go:34] apiserver oom_adj: -16
	I1107 23:30:52.683033 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:52.834099 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:53.431417 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:53.931103 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:54.431694 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:54.930736 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:55.431322 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:55.930783 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:56.431316 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:56.931358 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:57.431355 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:57.931451 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:58.430859 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:58.931239 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:59.431358 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:30:59.931343 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:31:00.431090 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:31:00.931421 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:31:01.431697 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:31:01.930713 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:31:02.430762 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:31:02.931003 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:31:03.430941 1455521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:31:03.556265 1455521 kubeadm.go:1081] duration metric: took 11.015970097s to wait for elevateKubeSystemPrivileges.
	I1107 23:31:03.556292 1455521 kubeadm.go:406] StartCluster complete in 28.884483352s
	I1107 23:31:03.556309 1455521 settings.go:142] acquiring lock: {Name:mk87503ca622eddfd1b600486068357de065638c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:31:03.556944 1455521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:31:03.557348 1455521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/kubeconfig: {Name:mk5ec442d2fb6aea8291322e188521db23ee465e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:31:03.558139 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:31:03.558437 1455521 config.go:182] Loaded profile config "addons-862145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:31:03.558546 1455521 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1107 23:31:03.558626 1455521 addons.go:69] Setting volumesnapshots=true in profile "addons-862145"
	I1107 23:31:03.558643 1455521 addons.go:231] Setting addon volumesnapshots=true in "addons-862145"
	I1107 23:31:03.558700 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.559176 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.559689 1455521 addons.go:69] Setting cloud-spanner=true in profile "addons-862145"
	I1107 23:31:03.559710 1455521 addons.go:231] Setting addon cloud-spanner=true in "addons-862145"
	I1107 23:31:03.559755 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.560172 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.560575 1455521 addons.go:69] Setting metrics-server=true in profile "addons-862145"
	I1107 23:31:03.560602 1455521 addons.go:231] Setting addon metrics-server=true in "addons-862145"
	I1107 23:31:03.560640 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.561059 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.561541 1455521 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-862145"
	I1107 23:31:03.561589 1455521 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-862145"
	I1107 23:31:03.561627 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.562048 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.566747 1455521 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-862145"
	I1107 23:31:03.567000 1455521 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-862145"
	I1107 23:31:03.571820 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.572389 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.566903 1455521 addons.go:69] Setting registry=true in profile "addons-862145"
	I1107 23:31:03.576972 1455521 addons.go:231] Setting addon registry=true in "addons-862145"
	I1107 23:31:03.566913 1455521 addons.go:69] Setting storage-provisioner=true in profile "addons-862145"
	I1107 23:31:03.577065 1455521 addons.go:231] Setting addon storage-provisioner=true in "addons-862145"
	I1107 23:31:03.566918 1455521 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-862145"
	I1107 23:31:03.577138 1455521 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-862145"
	I1107 23:31:03.571684 1455521 addons.go:69] Setting default-storageclass=true in profile "addons-862145"
	I1107 23:31:03.585287 1455521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-862145"
	I1107 23:31:03.585681 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.571697 1455521 addons.go:69] Setting gcp-auth=true in profile "addons-862145"
	I1107 23:31:03.585906 1455521 mustload.go:65] Loading cluster: addons-862145
	I1107 23:31:03.571708 1455521 addons.go:69] Setting ingress-dns=true in profile "addons-862145"
	I1107 23:31:03.571726 1455521 addons.go:69] Setting inspektor-gadget=true in profile "addons-862145"
	I1107 23:31:03.587381 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.588055 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.571704 1455521 addons.go:69] Setting ingress=true in profile "addons-862145"
	I1107 23:31:03.606035 1455521 addons.go:231] Setting addon ingress=true in "addons-862145"
	I1107 23:31:03.606137 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.587172 1455521 addons.go:231] Setting addon ingress-dns=true in "addons-862145"
	I1107 23:31:03.606764 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.607211 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.609269 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.610030 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.587258 1455521 addons.go:231] Setting addon inspektor-gadget=true in "addons-862145"
	I1107 23:31:03.624753 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.625249 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.628739 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.662048 1455521 config.go:182] Loaded profile config "addons-862145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:31:03.662467 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.682803 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.756446 1455521 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1107 23:31:03.759468 1455521 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 23:31:03.759530 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 23:31:03.759617 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.767504 1455521 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1107 23:31:03.769630 1455521 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1107 23:31:03.769650 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1107 23:31:03.769775 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.784718 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1107 23:31:03.787050 1455521 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1107 23:31:03.787078 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1107 23:31:03.789037 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.806384 1455521 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1107 23:31:03.804774 1455521 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-862145" context rescaled to 1 replicas
	I1107 23:31:03.784704 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:31:03.809859 1455521 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 23:31:03.810772 1455521 addons.go:231] Setting addon default-storageclass=true in "addons-862145"
	I1107 23:31:03.811884 1455521 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:31:03.814515 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1107 23:31:03.814523 1455521 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1107 23:31:03.831621 1455521 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1107 23:31:03.822255 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.822269 1455521 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1107 23:31:03.822290 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1107 23:31:03.831680 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.832172 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.832183 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1107 23:31:03.832227 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.835929 1455521 out.go:177] * Verifying Kubernetes components...
	I1107 23:31:03.839651 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1107 23:31:03.841553 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1107 23:31:03.843897 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1107 23:31:03.846318 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1107 23:31:03.850557 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1107 23:31:03.847782 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.847875 1455521 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1107 23:31:03.853848 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1107 23:31:03.853931 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.885445 1455521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:31:03.926783 1455521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:31:03.902356 1455521 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-862145"
	I1107 23:31:03.932882 1455521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:31:03.929326 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1107 23:31:03.929422 1455521 out.go:177]   - Using image docker.io/registry:2.8.3
	I1107 23:31:03.929430 1455521 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:31:03.929475 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:03.938608 1455521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1107 23:31:03.936197 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:03.936217 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:31:03.936394 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:03.947263 1455521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:31:03.943078 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.972083 1455521 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 23:31:03.972107 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1107 23:31:03.972174 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.989627 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:03.990979 1455521 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1107 23:31:03.993210 1455521 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1107 23:31:03.993232 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1107 23:31:03.993302 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:03.991279 1455521 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1107 23:31:04.022194 1455521 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1107 23:31:04.022230 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1107 23:31:04.022304 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:04.018117 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.038088 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.044889 1455521 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:31:04.044918 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:31:04.044986 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:04.058671 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.066246 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.146580 1455521 out.go:177]   - Using image docker.io/busybox:stable
	I1107 23:31:04.153492 1455521 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1107 23:31:04.155392 1455521 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1107 23:31:04.155422 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1107 23:31:04.155494 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:04.153718 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.157855 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.203398 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.211595 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.214172 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.243486 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:04.399784 1455521 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 23:31:04.399806 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1107 23:31:04.449828 1455521 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1107 23:31:04.449860 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1107 23:31:04.463208 1455521 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1107 23:31:04.463236 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1107 23:31:04.491514 1455521 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 23:31:04.491553 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 23:31:04.530106 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 23:31:04.550137 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1107 23:31:04.562779 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1107 23:31:04.611344 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:31:04.614937 1455521 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:31:04.614969 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 23:31:04.615429 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 23:31:04.626567 1455521 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1107 23:31:04.626601 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1107 23:31:04.633705 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:31:04.638811 1455521 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1107 23:31:04.638844 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1107 23:31:04.642057 1455521 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1107 23:31:04.642088 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1107 23:31:04.652334 1455521 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1107 23:31:04.652360 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1107 23:31:04.730594 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1107 23:31:04.763803 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:31:04.768037 1455521 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1107 23:31:04.768070 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1107 23:31:04.770085 1455521 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1107 23:31:04.770107 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1107 23:31:04.775182 1455521 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1107 23:31:04.775203 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1107 23:31:04.814656 1455521 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1107 23:31:04.814688 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1107 23:31:04.969932 1455521 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1107 23:31:04.969968 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1107 23:31:04.978982 1455521 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1107 23:31:04.979007 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1107 23:31:04.982392 1455521 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1107 23:31:04.982424 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1107 23:31:04.984716 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1107 23:31:05.195446 1455521 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1107 23:31:05.195472 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1107 23:31:05.214307 1455521 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1107 23:31:05.214340 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1107 23:31:05.249863 1455521 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:31:05.249893 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1107 23:31:05.298633 1455521 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1107 23:31:05.298669 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1107 23:31:05.311990 1455521 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1107 23:31:05.312017 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1107 23:31:05.340304 1455521 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1107 23:31:05.340328 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1107 23:31:05.356617 1455521 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1107 23:31:05.356647 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1107 23:31:05.408012 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:31:05.423733 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1107 23:31:05.430529 1455521 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1107 23:31:05.430557 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1107 23:31:05.508786 1455521 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1107 23:31:05.508811 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1107 23:31:05.645497 1455521 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1107 23:31:05.645530 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1107 23:31:05.761686 1455521 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1107 23:31:05.761708 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1107 23:31:05.840399 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1107 23:31:07.024844 1455521 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.21029716s)
	I1107 23:31:07.024913 1455521 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1107 23:31:07.024988 1455521 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.139459701s)
	I1107 23:31:07.025877 1455521 node_ready.go:35] waiting up to 6m0s for node "addons-862145" to be "Ready" ...
	I1107 23:31:08.798272 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.268129445s)
	I1107 23:31:08.798398 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.248229489s)
	I1107 23:31:08.798458 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.235656299s)
	I1107 23:31:08.798514 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.18714701s)
	I1107 23:31:09.456146 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:09.946204 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.330733055s)
	I1107 23:31:09.946241 1455521 addons.go:467] Verifying addon ingress=true in "addons-862145"
	I1107 23:31:09.949493 1455521 out.go:177] * Verifying ingress addon...
	I1107 23:31:09.946448 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.312718991s)
	I1107 23:31:09.946578 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.215959414s)
	I1107 23:31:09.946650 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.182803615s)
	I1107 23:31:09.946680 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.961927803s)
	I1107 23:31:09.946824 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.5387777s)
	I1107 23:31:09.946902 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.523137442s)
	I1107 23:31:09.953292 1455521 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1107 23:31:09.953389 1455521 addons.go:467] Verifying addon metrics-server=true in "addons-862145"
	I1107 23:31:09.953592 1455521 addons.go:467] Verifying addon registry=true in "addons-862145"
	I1107 23:31:09.956756 1455521 out.go:177] * Verifying registry addon...
	W1107 23:31:09.953622 1455521 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1107 23:31:09.956874 1455521 retry.go:31] will retry after 194.381741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1107 23:31:09.960607 1455521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1107 23:31:09.970701 1455521 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1107 23:31:09.970770 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:09.971912 1455521 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1107 23:31:09.971966 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:09.979650 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:09.982556 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:10.151467 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:31:10.342526 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.502032622s)
	I1107 23:31:10.342562 1455521 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-862145"
	I1107 23:31:10.347474 1455521 out.go:177] * Verifying csi-hostpath-driver addon...
	I1107 23:31:10.351693 1455521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1107 23:31:10.425338 1455521 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1107 23:31:10.425364 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:10.478405 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:10.494210 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:10.495179 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:10.985115 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:10.987590 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:10.989724 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:11.287809 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.1362968s)
	I1107 23:31:11.483505 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:11.484871 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:11.487748 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:11.945538 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:11.992291 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:12.002921 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:12.003448 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:12.419177 1455521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1107 23:31:12.419323 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:12.463615 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:12.484939 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:12.487385 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:12.494245 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:12.661484 1455521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1107 23:31:12.717158 1455521 addons.go:231] Setting addon gcp-auth=true in "addons-862145"
	I1107 23:31:12.717353 1455521 host.go:66] Checking if "addons-862145" exists ...
	I1107 23:31:12.717838 1455521 cli_runner.go:164] Run: docker container inspect addons-862145 --format={{.State.Status}}
	I1107 23:31:12.767072 1455521 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1107 23:31:12.767135 1455521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-862145
	I1107 23:31:12.798538 1455521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/addons-862145/id_rsa Username:docker}
	I1107 23:31:12.958036 1455521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:31:12.960691 1455521 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1107 23:31:12.963429 1455521 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1107 23:31:12.963459 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1107 23:31:12.984979 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:12.985716 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:12.993191 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:13.025608 1455521 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1107 23:31:13.025636 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1107 23:31:13.051119 1455521 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1107 23:31:13.051147 1455521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1107 23:31:13.075872 1455521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1107 23:31:13.487155 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:13.495258 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:13.497995 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:13.949417 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:14.021221 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:14.022896 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:14.023722 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:14.111352 1455521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.035440436s)
	I1107 23:31:14.113131 1455521 addons.go:467] Verifying addon gcp-auth=true in "addons-862145"
	I1107 23:31:14.115548 1455521 out.go:177] * Verifying gcp-auth addon...
	I1107 23:31:14.118669 1455521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1107 23:31:14.127599 1455521 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1107 23:31:14.127621 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:14.131273 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:14.492473 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:14.496941 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:14.497929 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:14.635697 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:14.984519 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:14.987158 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:14.991094 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:15.142064 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:15.487241 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:15.500544 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:15.506776 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:15.635607 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:15.984310 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:15.991294 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:15.993027 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:16.135993 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:16.446172 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:16.489096 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:16.497398 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:16.502446 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:16.635134 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:16.987247 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:16.988009 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:16.990639 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:17.135453 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:17.483954 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:17.485329 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:17.486927 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:17.638132 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:17.985323 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:17.985836 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:17.986825 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:18.135684 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:18.483940 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:18.485605 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:18.487207 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:18.635463 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:18.945273 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:18.984331 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:18.984786 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:18.987317 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:19.136507 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:19.484555 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:19.485597 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:19.487497 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:19.634928 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:19.984457 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:19.985299 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:19.987467 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:20.134897 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:20.484405 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:20.485661 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:20.487117 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:20.635388 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:20.945544 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:20.983431 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:20.984812 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:20.987226 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:21.135362 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:21.484788 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:21.485259 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:21.487886 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:21.635684 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:21.984176 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:21.985049 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:21.986810 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:22.135271 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:22.484574 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:22.486611 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:22.494583 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:22.634799 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:22.945926 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:22.984577 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:22.985212 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:22.986870 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:23.135823 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:23.487504 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:23.487833 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:23.489206 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:23.635438 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:23.985570 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:23.986361 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:23.987809 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:24.135197 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:24.483778 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:24.485522 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:24.486977 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:24.636011 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:24.985895 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:24.986403 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:24.988650 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:25.135046 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:25.445620 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:25.483210 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:25.485327 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:25.487044 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:25.635368 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:25.984394 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:25.985951 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:25.987775 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:26.135810 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:26.484816 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:26.485000 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:26.487138 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:26.634858 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:26.982857 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:26.984829 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:26.986752 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:27.135028 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:27.445966 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:27.482836 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:27.488742 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:27.492911 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:27.635735 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:27.984645 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:27.985699 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:27.988697 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:28.134883 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:28.484473 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:28.485936 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:28.487471 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:28.635069 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:28.984488 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:28.985797 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:28.988025 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:29.135751 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:29.483742 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:29.486593 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:29.488090 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:29.635608 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:29.946035 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:29.983976 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:29.984696 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:29.986939 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:30.135952 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:30.484335 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:30.485224 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:30.487406 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:30.635961 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:30.983263 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:30.984296 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:30.986275 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:31.135703 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:31.482900 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:31.485201 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:31.487300 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:31.635813 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:31.984859 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:31.985555 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:31.987148 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:32.135457 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:32.445784 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:32.484467 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:32.485109 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:32.487268 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:32.635602 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:32.983782 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:32.984332 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:32.987501 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:33.137247 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:33.484645 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:33.485184 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:33.490429 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:33.634902 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:33.983646 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:33.985377 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:33.988372 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:34.134601 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:34.483942 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:34.484851 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:34.486980 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:34.635332 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:34.945756 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:34.985965 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:34.986445 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:34.987431 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:35.134812 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:35.482918 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:35.485132 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:35.486984 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:35.635525 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:35.983407 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:35.984150 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:35.986577 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:36.135421 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:36.483769 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:36.485024 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:36.486757 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:36.635481 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:36.984160 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:36.985320 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:36.986812 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:37.135559 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:37.445796 1455521 node_ready.go:58] node "addons-862145" has status "Ready":"False"
	I1107 23:31:37.484188 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:37.486357 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:37.488976 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:37.635284 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:37.983411 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:37.986641 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:37.987402 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:38.135001 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:38.484016 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:38.486229 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:38.489138 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:38.635398 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:38.957581 1455521 node_ready.go:49] node "addons-862145" has status "Ready":"True"
	I1107 23:31:38.957609 1455521 node_ready.go:38] duration metric: took 31.931598707s waiting for node "addons-862145" to be "Ready" ...
	I1107 23:31:38.957620 1455521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:31:38.971309 1455521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qbq8g" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:38.993903 1455521 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1107 23:31:38.993929 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:38.998240 1455521 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1107 23:31:38.998269 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:39.002364 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:39.172333 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:39.509578 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:39.511335 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:39.515167 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:39.635655 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:39.990631 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:39.993643 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:40.018552 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:40.135275 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:40.484231 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:40.485148 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:40.487526 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:40.636292 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:40.987617 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:40.987918 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:40.993184 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:41.054900 1455521 pod_ready.go:92] pod "coredns-5dd5756b68-qbq8g" in "kube-system" namespace has status "Ready":"True"
	I1107 23:31:41.054925 1455521 pod_ready.go:81] duration metric: took 2.083578994s waiting for pod "coredns-5dd5756b68-qbq8g" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.054948 1455521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-862145" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.061131 1455521 pod_ready.go:92] pod "etcd-addons-862145" in "kube-system" namespace has status "Ready":"True"
	I1107 23:31:41.061154 1455521 pod_ready.go:81] duration metric: took 6.198603ms waiting for pod "etcd-addons-862145" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.061169 1455521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-862145" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.067944 1455521 pod_ready.go:92] pod "kube-apiserver-addons-862145" in "kube-system" namespace has status "Ready":"True"
	I1107 23:31:41.067971 1455521 pod_ready.go:81] duration metric: took 6.793626ms waiting for pod "kube-apiserver-addons-862145" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.067983 1455521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-862145" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.074192 1455521 pod_ready.go:92] pod "kube-controller-manager-addons-862145" in "kube-system" namespace has status "Ready":"True"
	I1107 23:31:41.074215 1455521 pod_ready.go:81] duration metric: took 6.224031ms waiting for pod "kube-controller-manager-addons-862145" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.074229 1455521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mlpwh" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.135040 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:41.346486 1455521 pod_ready.go:92] pod "kube-proxy-mlpwh" in "kube-system" namespace has status "Ready":"True"
	I1107 23:31:41.346513 1455521 pod_ready.go:81] duration metric: took 272.276639ms waiting for pod "kube-proxy-mlpwh" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.346524 1455521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-862145" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.485041 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:41.487119 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:41.490028 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:41.641469 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:41.746404 1455521 pod_ready.go:92] pod "kube-scheduler-addons-862145" in "kube-system" namespace has status "Ready":"True"
	I1107 23:31:41.746432 1455521 pod_ready.go:81] duration metric: took 399.899653ms waiting for pod "kube-scheduler-addons-862145" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:41.746444 1455521 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-dcc2j" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:42.019570 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:42.020805 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:42.021497 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:42.136026 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:42.495063 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:42.498804 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:42.502289 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:42.635668 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:43.022068 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:43.023778 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:43.043989 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:43.138265 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:43.486263 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:43.493969 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:43.497480 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:43.635732 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:43.989135 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:43.991615 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:43.994577 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:44.057505 1455521 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dcc2j" in "kube-system" namespace has status "Ready":"False"
	I1107 23:31:44.136100 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:44.485155 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:44.487929 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:44.495817 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:44.642858 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:45.018366 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:45.050981 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:45.052079 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:45.072798 1455521 pod_ready.go:92] pod "metrics-server-7c66d45ddc-dcc2j" in "kube-system" namespace has status "Ready":"True"
	I1107 23:31:45.072828 1455521 pod_ready.go:81] duration metric: took 3.326376439s waiting for pod "metrics-server-7c66d45ddc-dcc2j" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:45.072843 1455521 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace to be "Ready" ...
	I1107 23:31:45.139008 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:45.491675 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:45.493309 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:45.494649 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:45.635892 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:45.983931 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:45.986868 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:45.989858 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:46.135034 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:46.485162 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:46.487509 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:46.489412 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:46.635887 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:46.984264 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:46.989246 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:46.990050 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:47.135150 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:47.367178 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:31:47.493483 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:47.494951 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:47.495948 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:47.637398 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:47.991778 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:47.993198 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:47.996419 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:48.136411 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:48.484854 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:48.487327 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:48.489057 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:48.636003 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:48.987261 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:49.000054 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:49.004361 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:49.144031 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:49.394086 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:31:49.488470 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:49.490995 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:49.499938 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:49.635426 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:49.990025 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:49.996389 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:50.002355 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:50.136259 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:50.487439 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:50.487812 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:50.494869 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:50.635549 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:50.988586 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:50.989359 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:50.992516 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:51.136613 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:51.499665 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:51.518318 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:51.519196 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:51.636127 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:51.859767 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:31:51.992662 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:51.996605 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:52.001268 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:52.136422 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:52.489685 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:52.493439 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:52.496690 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:52.636025 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:52.984650 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:52.987658 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:52.990099 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:53.141241 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:53.485067 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:53.486554 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:53.487623 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:53.635506 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:53.984061 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:53.986860 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:53.990231 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:54.135991 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:54.365546 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:31:54.485400 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:54.489305 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:54.494248 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:54.635398 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:54.984233 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:54.989810 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:54.990767 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:55.135015 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:55.485165 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:55.486894 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:55.491418 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:55.635139 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:55.985427 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:55.989508 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:55.990670 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:56.135822 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:56.486525 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:56.487731 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:56.491246 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:56.635834 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:56.859723 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:31:56.989218 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:56.989428 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:56.990488 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:57.136193 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:57.488881 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:57.490395 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:57.490807 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:57.639100 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:57.984450 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:57.985255 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:57.988704 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:58.134621 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:58.487215 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:58.487254 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:58.490322 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:58.635323 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:58.859775 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:31:58.985126 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:58.986944 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:58.990559 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:59.135730 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:59.486683 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:59.487518 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:59.490666 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:31:59.637747 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:31:59.984721 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:31:59.988221 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:31:59.990575 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:00.199663 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:00.490212 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:00.491770 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:00.492102 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:00.636181 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:00.860142 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:32:00.998674 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:01.000907 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:01.008750 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:01.136590 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:01.495304 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:01.498728 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:01.501193 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:01.635658 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:01.991117 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:01.991211 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:02.004983 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:02.135253 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:02.504721 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:02.513544 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:02.522852 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:02.641922 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:02.984999 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:02.986754 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:02.990138 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:03.135367 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:03.358920 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:32:03.485349 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:03.486543 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:03.489594 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:03.639014 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:03.985428 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:03.989779 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:03.990212 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:04.135814 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:04.484124 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:04.486802 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:04.489233 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:04.635668 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:04.988105 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:04.991936 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:05.003471 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:05.135972 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:05.382526 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:32:05.488603 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:05.495787 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:05.498028 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:05.635566 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:05.984276 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:05.987278 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:05.988243 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:06.135974 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:06.484498 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:06.487137 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:06.489354 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:06.635373 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:06.984478 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:06.987786 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:06.990104 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:07.135411 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:07.493324 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:07.494939 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:07.499353 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:07.635785 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:07.860235 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:32:07.996799 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:07.997509 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:07.997632 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:08.135712 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:08.497353 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:08.500041 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:08.505001 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:08.643922 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:08.984527 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:08.988235 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:08.990611 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:09.136398 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:09.484480 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:09.487075 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:09.490297 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:09.635038 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:09.984369 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:09.987567 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:09.989025 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:10.135102 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:10.361895 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:32:10.485364 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:10.486508 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:10.489255 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:10.635078 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:10.986314 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:10.991293 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:10.993409 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:11.135598 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:11.486082 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:11.499810 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:11.502395 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:11.635608 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:11.989360 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:11.998538 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:12.004246 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:12.138413 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:12.380417 1455521 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"False"
	I1107 23:32:12.485158 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:12.492163 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:12.494952 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:12.636457 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:12.986481 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:12.995923 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:12.996632 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:13.137861 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:13.485211 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:13.486377 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:13.490480 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:13.636032 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:13.990748 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:13.993319 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:13.994610 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:14.136083 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:14.374277 1455521 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace has status "Ready":"True"
	I1107 23:32:14.374309 1455521 pod_ready.go:81] duration metric: took 29.301458937s waiting for pod "nvidia-device-plugin-daemonset-2mxvg" in "kube-system" namespace to be "Ready" ...
	I1107 23:32:14.374339 1455521 pod_ready.go:38] duration metric: took 35.41670816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:32:14.374367 1455521 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:32:14.374398 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1107 23:32:14.374485 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 23:32:14.425101 1455521 cri.go:89] found id: "a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567"
	I1107 23:32:14.425180 1455521 cri.go:89] found id: ""
	I1107 23:32:14.425202 1455521 logs.go:284] 1 containers: [a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567]
	I1107 23:32:14.425290 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:14.430035 1455521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1107 23:32:14.430107 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 23:32:14.477565 1455521 cri.go:89] found id: "f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed"
	I1107 23:32:14.477591 1455521 cri.go:89] found id: ""
	I1107 23:32:14.477600 1455521 logs.go:284] 1 containers: [f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed]
	I1107 23:32:14.477665 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:14.483446 1455521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1107 23:32:14.483560 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 23:32:14.487193 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:14.490913 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:14.491534 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:14.533199 1455521 cri.go:89] found id: "155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9"
	I1107 23:32:14.533222 1455521 cri.go:89] found id: ""
	I1107 23:32:14.533230 1455521 logs.go:284] 1 containers: [155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9]
	I1107 23:32:14.533286 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:14.538288 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1107 23:32:14.538361 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 23:32:14.583681 1455521 cri.go:89] found id: "ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b"
	I1107 23:32:14.583704 1455521 cri.go:89] found id: ""
	I1107 23:32:14.583712 1455521 logs.go:284] 1 containers: [ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b]
	I1107 23:32:14.583768 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:14.588364 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1107 23:32:14.588438 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 23:32:14.632999 1455521 cri.go:89] found id: "a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f"
	I1107 23:32:14.633063 1455521 cri.go:89] found id: ""
	I1107 23:32:14.633084 1455521 logs.go:284] 1 containers: [a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f]
	I1107 23:32:14.633170 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:14.636598 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:14.639152 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 23:32:14.639230 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 23:32:14.691358 1455521 cri.go:89] found id: "551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee"
	I1107 23:32:14.691381 1455521 cri.go:89] found id: ""
	I1107 23:32:14.691389 1455521 logs.go:284] 1 containers: [551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee]
	I1107 23:32:14.691443 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:14.696213 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1107 23:32:14.696283 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1107 23:32:14.749142 1455521 cri.go:89] found id: "2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6"
	I1107 23:32:14.749163 1455521 cri.go:89] found id: ""
	I1107 23:32:14.749171 1455521 logs.go:284] 1 containers: [2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6]
	I1107 23:32:14.749249 1455521 ssh_runner.go:195] Run: which crictl
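Each cri.go / ssh_runner.go pair above resolves a control-plane component to its container ID by running crictl inside the node. Run locally rather than through minikube's SSH runner, the same lookup is roughly the sketch below; the helper name containerIDs is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>" from the log above and
// returns the container IDs crictl prints, one per line.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}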
	I1107 23:32:14.754453 1455521 logs.go:123] Gathering logs for describe nodes ...
	I1107 23:32:14.754479 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1107 23:32:14.941049 1455521 logs.go:123] Gathering logs for kube-proxy [a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f] ...
	I1107 23:32:14.941082 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f"
	I1107 23:32:14.985618 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:14.988121 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:14.990799 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:14.996696 1455521 logs.go:123] Gathering logs for kindnet [2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6] ...
	I1107 23:32:14.996726 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6"
	I1107 23:32:15.097040 1455521 logs.go:123] Gathering logs for container status ...
	I1107 23:32:15.097071 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 23:32:15.135893 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:15.164224 1455521 logs.go:123] Gathering logs for dmesg ...
	I1107 23:32:15.164439 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 23:32:15.207715 1455521 logs.go:123] Gathering logs for kube-apiserver [a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567] ...
	I1107 23:32:15.207799 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567"
	I1107 23:32:15.303320 1455521 logs.go:123] Gathering logs for etcd [f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed] ...
	I1107 23:32:15.303402 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed"
	I1107 23:32:15.382522 1455521 logs.go:123] Gathering logs for coredns [155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9] ...
	I1107 23:32:15.382556 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9"
	I1107 23:32:15.427729 1455521 logs.go:123] Gathering logs for kube-scheduler [ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b] ...
	I1107 23:32:15.427758 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b"
	I1107 23:32:15.477586 1455521 logs.go:123] Gathering logs for kube-controller-manager [551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee] ...
	I1107 23:32:15.477618 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee"
	I1107 23:32:15.488457 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:15.492949 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:15.493497 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:15.591396 1455521 logs.go:123] Gathering logs for CRI-O ...
	I1107 23:32:15.591441 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1107 23:32:15.637272 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:15.685119 1455521 logs.go:123] Gathering logs for kubelet ...
	I1107 23:32:15.685158 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 23:32:15.753965 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:05 addons-862145 kubelet[1339]: W1107 23:31:05.818652    1339 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.754198 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:05 addons-862145 kubelet[1339]: E1107 23:31:05.818829    1339 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.760286 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.731634    1339 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.760491 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.731679    1339 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.760657 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.734956    1339 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.760840 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.734999    1339 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.761602 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.748697    1339 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.761801 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.748746    1339 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	I1107 23:32:15.790617 1455521 out.go:309] Setting ErrFile to fd 2...
	I1107 23:32:15.790644 1455521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 23:32:15.790728 1455521 out.go:239] X Problems detected in kubelet:
	W1107 23:32:15.790743 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.731679    1339 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.790750 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.734956    1339 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.790763 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.734999    1339 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.790772 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.748697    1339 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	W1107 23:32:15.790783 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.748746    1339 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	I1107 23:32:15.790806 1455521 out.go:309] Setting ErrFile to fd 2...
	I1107 23:32:15.790813 1455521 out.go:343] TERM=,COLORTERM=, which probably does not support color
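The logs.go:138 warnings above come from scanning the kubelet journal for known failure patterns (here, reflector list/watch calls rejected by the node authorizer), which are then echoed back as "Problems detected in kubelet". A rough, assumed equivalent of that scan; the pattern list is illustrative, not minikube's actual one:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the last 400 kubelet journal lines, as the ssh_runner command above does.
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		panic(err)
	}
	// Flag lines resembling the reflector / RBAC failures reported by logs.go:138.
	patterns := []string{"reflector.go", "is forbidden", "no relationship found between node"}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}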
	I1107 23:32:15.988404 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:15.988514 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:15.990012 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:16.136458 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:16.484871 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:16.488141 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:16.491685 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:16.635691 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:16.987936 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:16.989669 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:16.990872 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:17.136258 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:17.485387 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:17.486778 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:17.489101 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:17.635179 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:17.985105 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:17.987911 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:17.992858 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:18.135917 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:18.485347 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:18.486840 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:18.488866 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:18.635547 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:18.992361 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:18.993946 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:18.996598 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:19.139197 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:19.484098 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:19.493314 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:19.496393 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:19.635325 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:19.991520 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:19.995453 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:19.996684 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:20.135900 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:20.492272 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:20.494331 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:20.500853 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:20.635571 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:20.989167 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:20.990948 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:20.998340 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:21.140515 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:21.489752 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:21.492124 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:21.494725 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:21.636453 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:21.993068 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:21.995651 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:22.000455 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:22.135896 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:22.490155 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:22.495448 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:22.496999 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:22.637165 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:22.985317 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:22.986563 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:22.989643 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:32:23.138018 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:23.502630 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:23.503992 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:23.510779 1455521 kapi.go:107] duration metric: took 1m13.55016993s to wait for kubernetes.io/minikube-addons=registry ...
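The repeating kapi.go:96 lines are a label-selector wait: list the pods carrying the addon's label and keep polling until every match is running, which is what finally completes here for kubernetes.io/minikube-addons=registry. A minimal client-go sketch of such a selector wait, under the same kubeconfig assumptions as the earlier readiness sketch; waitSelectorRunning is an illustrative name:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitSelectorRunning polls until at least one pod matches the selector and all matches are Running.
func waitSelectorRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pods matching %q not Running after %v", selector, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitSelectorRunning(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 2*time.Minute))
}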
	I1107 23:32:23.636476 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:23.985224 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:23.986476 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:24.135224 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:24.486851 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:24.487853 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:24.639892 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:24.984905 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:24.986769 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:25.136450 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:25.486612 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:25.487823 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:25.635912 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:25.791199 1455521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:32:25.807251 1455521 api_server.go:72] duration metric: took 1m21.98492877s to wait for apiserver process to appear ...
	I1107 23:32:25.807324 1455521 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:32:25.807364 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1107 23:32:25.807421 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 23:32:25.856328 1455521 cri.go:89] found id: "a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567"
	I1107 23:32:25.856352 1455521 cri.go:89] found id: ""
	I1107 23:32:25.856361 1455521 logs.go:284] 1 containers: [a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567]
	I1107 23:32:25.856413 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:25.861378 1455521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1107 23:32:25.861459 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 23:32:25.913931 1455521 cri.go:89] found id: "f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed"
	I1107 23:32:25.914022 1455521 cri.go:89] found id: ""
	I1107 23:32:25.914046 1455521 logs.go:284] 1 containers: [f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed]
	I1107 23:32:25.914120 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:25.918742 1455521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1107 23:32:25.918812 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 23:32:25.962060 1455521 cri.go:89] found id: "155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9"
	I1107 23:32:25.962083 1455521 cri.go:89] found id: ""
	I1107 23:32:25.962091 1455521 logs.go:284] 1 containers: [155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9]
	I1107 23:32:25.962149 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:25.966964 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1107 23:32:25.967035 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 23:32:25.984334 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:25.988702 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:26.040567 1455521 cri.go:89] found id: "ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b"
	I1107 23:32:26.040591 1455521 cri.go:89] found id: ""
	I1107 23:32:26.040599 1455521 logs.go:284] 1 containers: [ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b]
	I1107 23:32:26.040660 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:26.047017 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1107 23:32:26.047105 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 23:32:26.105028 1455521 cri.go:89] found id: "a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f"
	I1107 23:32:26.105106 1455521 cri.go:89] found id: ""
	I1107 23:32:26.105128 1455521 logs.go:284] 1 containers: [a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f]
	I1107 23:32:26.105217 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:26.111730 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 23:32:26.111840 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 23:32:26.136358 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:26.168279 1455521 cri.go:89] found id: "551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee"
	I1107 23:32:26.168304 1455521 cri.go:89] found id: ""
	I1107 23:32:26.168322 1455521 logs.go:284] 1 containers: [551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee]
	I1107 23:32:26.168380 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:26.173032 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1107 23:32:26.173123 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1107 23:32:26.233763 1455521 cri.go:89] found id: "2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6"
	I1107 23:32:26.233787 1455521 cri.go:89] found id: ""
	I1107 23:32:26.233796 1455521 logs.go:284] 1 containers: [2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6]
	I1107 23:32:26.233878 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:26.239077 1455521 logs.go:123] Gathering logs for coredns [155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9] ...
	I1107 23:32:26.239102 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9"
	I1107 23:32:26.290635 1455521 logs.go:123] Gathering logs for kube-proxy [a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f] ...
	I1107 23:32:26.290672 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f"
	I1107 23:32:26.345775 1455521 logs.go:123] Gathering logs for CRI-O ...
	I1107 23:32:26.345808 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1107 23:32:26.448983 1455521 logs.go:123] Gathering logs for container status ...
	I1107 23:32:26.449065 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 23:32:26.495741 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:26.496338 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:26.587248 1455521 logs.go:123] Gathering logs for kubelet ...
	I1107 23:32:26.587319 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 23:32:26.639868 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1107 23:32:26.660246 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:05 addons-862145 kubelet[1339]: W1107 23:31:05.818652    1339 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-862145' and this object
	W1107 23:32:26.660587 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:05 addons-862145 kubelet[1339]: E1107 23:31:05.818829    1339 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-862145' and this object
	W1107 23:32:26.666842 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.731634    1339 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:26.667098 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.731679    1339 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:26.667292 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.734956    1339 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:26.667524 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.734999    1339 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:26.668309 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.748697    1339 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	W1107 23:32:26.668534 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.748746    1339 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	I1107 23:32:26.703932 1455521 logs.go:123] Gathering logs for kube-apiserver [a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567] ...
	I1107 23:32:26.704014 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567"
	I1107 23:32:26.883810 1455521 logs.go:123] Gathering logs for etcd [f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed] ...
	I1107 23:32:26.883888 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed"
	I1107 23:32:26.992078 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:26.993101 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:26.993286 1455521 logs.go:123] Gathering logs for kube-scheduler [ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b] ...
	I1107 23:32:26.993306 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b"
	I1107 23:32:27.133489 1455521 logs.go:123] Gathering logs for kube-controller-manager [551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee] ...
	I1107 23:32:27.133535 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee"
	I1107 23:32:27.140128 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:27.333808 1455521 logs.go:123] Gathering logs for kindnet [2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6] ...
	I1107 23:32:27.333863 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6"
	I1107 23:32:27.452079 1455521 logs.go:123] Gathering logs for dmesg ...
	I1107 23:32:27.452114 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 23:32:27.487965 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:27.491836 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:27.530478 1455521 logs.go:123] Gathering logs for describe nodes ...
	I1107 23:32:27.530508 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1107 23:32:27.635588 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:27.816289 1455521 out.go:309] Setting ErrFile to fd 2...
	I1107 23:32:27.816317 1455521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 23:32:27.816377 1455521 out.go:239] X Problems detected in kubelet:
	W1107 23:32:27.816394 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.731679    1339 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:27.816402 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.734956    1339 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:27.816414 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.734999    1339 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:27.816421 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.748697    1339 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	W1107 23:32:27.816586 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.748746    1339 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	I1107 23:32:27.816595 1455521 out.go:309] Setting ErrFile to fd 2...
	I1107 23:32:27.816609 1455521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:32:27.986881 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:27.990603 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:28.137179 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:28.491258 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:28.492557 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:28.636097 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:28.986834 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:28.987988 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:29.139238 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:29.490679 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:29.493531 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:29.635777 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:29.984584 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:29.986119 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:30.136072 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:30.487868 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:30.488512 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:30.635971 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:30.984668 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:30.985780 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:31.136241 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:31.488440 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:31.490057 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:31.635742 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:31.987040 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:31.987953 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:32.135813 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:32.487342 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:32.494252 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:32.637105 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:32.985644 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:32.986987 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:33.134980 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:33.484470 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:33.486994 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:33.635735 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:33.985598 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:33.987618 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:34.135862 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:34.485369 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:34.488642 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:34.638079 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:34.988824 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:34.991185 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:35.135240 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:35.488606 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:35.494702 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:35.635752 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:35.988685 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:35.990793 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:36.136560 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:36.487454 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:36.489841 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:36.636360 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:36.992293 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:36.993561 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:37.136170 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:37.494113 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:37.497499 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:37.636087 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:37.818197 1455521 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1107 23:32:37.827877 1455521 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1107 23:32:37.829206 1455521 api_server.go:141] control plane version: v1.28.3
	I1107 23:32:37.829232 1455521 api_server.go:131] duration metric: took 12.021892129s to wait for apiserver health ...
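The api_server.go healthz probe above is an HTTPS GET against the apiserver at 192.168.49.2:8443 that succeeds once the endpoint answers 200/ok. A bare-bones version of that probe; skipping certificate verification here is an assumption made for brevity (minikube itself uses the cluster's certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only: skip TLS verification instead of loading the cluster CA / client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect "200 ok" once the apiserver is healthy
}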
	I1107 23:32:37.829242 1455521 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:32:37.829262 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1107 23:32:37.829326 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 23:32:37.889760 1455521 cri.go:89] found id: "a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567"
	I1107 23:32:37.889786 1455521 cri.go:89] found id: ""
	I1107 23:32:37.889795 1455521 logs.go:284] 1 containers: [a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567]
	I1107 23:32:37.889847 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:37.895044 1455521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1107 23:32:37.895134 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 23:32:37.948546 1455521 cri.go:89] found id: "f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed"
	I1107 23:32:37.948580 1455521 cri.go:89] found id: ""
	I1107 23:32:37.948594 1455521 logs.go:284] 1 containers: [f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed]
	I1107 23:32:37.948655 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:37.956694 1455521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1107 23:32:37.956772 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 23:32:37.992452 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:37.994401 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:38.047886 1455521 cri.go:89] found id: "155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9"
	I1107 23:32:38.047943 1455521 cri.go:89] found id: ""
	I1107 23:32:38.047962 1455521 logs.go:284] 1 containers: [155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9]
	I1107 23:32:38.048031 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:38.055117 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1107 23:32:38.055202 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 23:32:38.142573 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:38.188791 1455521 cri.go:89] found id: "ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b"
	I1107 23:32:38.188871 1455521 cri.go:89] found id: ""
	I1107 23:32:38.188893 1455521 logs.go:284] 1 containers: [ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b]
	I1107 23:32:38.188977 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:38.195173 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1107 23:32:38.195286 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 23:32:38.325096 1455521 cri.go:89] found id: "a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f"
	I1107 23:32:38.325158 1455521 cri.go:89] found id: ""
	I1107 23:32:38.325182 1455521 logs.go:284] 1 containers: [a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f]
	I1107 23:32:38.325266 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:38.339482 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 23:32:38.339642 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 23:32:38.412271 1455521 cri.go:89] found id: "551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee"
	I1107 23:32:38.412345 1455521 cri.go:89] found id: ""
	I1107 23:32:38.412369 1455521 logs.go:284] 1 containers: [551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee]
	I1107 23:32:38.412452 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:38.417414 1455521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1107 23:32:38.417547 1455521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1107 23:32:38.488826 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:38.490902 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:38.503173 1455521 cri.go:89] found id: "2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6"
	I1107 23:32:38.503238 1455521 cri.go:89] found id: ""
	I1107 23:32:38.503260 1455521 logs.go:284] 1 containers: [2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6]
	I1107 23:32:38.503346 1455521 ssh_runner.go:195] Run: which crictl
	I1107 23:32:38.510624 1455521 logs.go:123] Gathering logs for coredns [155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9] ...
	I1107 23:32:38.510711 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9"
	I1107 23:32:38.571193 1455521 logs.go:123] Gathering logs for kube-controller-manager [551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee] ...
	I1107 23:32:38.571266 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee"
	I1107 23:32:38.635541 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:38.712044 1455521 logs.go:123] Gathering logs for CRI-O ...
	I1107 23:32:38.712138 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1107 23:32:38.826392 1455521 logs.go:123] Gathering logs for dmesg ...
	I1107 23:32:38.826475 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 23:32:38.854626 1455521 logs.go:123] Gathering logs for describe nodes ...
	I1107 23:32:38.854704 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1107 23:32:39.013371 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:39.014569 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:39.153021 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:39.165093 1455521 logs.go:123] Gathering logs for etcd [f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed] ...
	I1107 23:32:39.165484 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed"
	I1107 23:32:39.323835 1455521 logs.go:123] Gathering logs for kube-scheduler [ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b] ...
	I1107 23:32:39.324441 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b"
	I1107 23:32:39.403717 1455521 logs.go:123] Gathering logs for kube-proxy [a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f] ...
	I1107 23:32:39.403790 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f"
	I1107 23:32:39.493248 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:39.500816 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:39.502141 1455521 logs.go:123] Gathering logs for kindnet [2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6] ...
	I1107 23:32:39.502177 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6"
	I1107 23:32:39.636407 1455521 logs.go:123] Gathering logs for container status ...
	I1107 23:32:39.636438 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 23:32:39.648742 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:39.738416 1455521 logs.go:123] Gathering logs for kubelet ...
	I1107 23:32:39.738488 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 23:32:39.807630 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:05 addons-862145 kubelet[1339]: W1107 23:31:05.818652    1339 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.807860 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:05 addons-862145 kubelet[1339]: E1107 23:31:05.818829    1339 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.814097 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.731634    1339 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.815965 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.731679    1339 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.816164 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.734956    1339 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.816351 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.734999    1339 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.817108 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.748697    1339 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.817305 1455521 logs.go:138] Found kubelet problem: Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.748746    1339 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	I1107 23:32:39.852438 1455521 logs.go:123] Gathering logs for kube-apiserver [a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567] ...
	I1107 23:32:39.852473 1455521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567"
	I1107 23:32:39.964828 1455521 out.go:309] Setting ErrFile to fd 2...
	I1107 23:32:39.964899 1455521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 23:32:39.964990 1455521 out.go:239] X Problems detected in kubelet:
	W1107 23:32:39.965031 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.731679    1339 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.965301 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.734956    1339 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.965338 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.734999    1339 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-862145" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.965383 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: W1107 23:31:38.748697    1339 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	W1107 23:32:39.965428 1455521 out.go:239]   Nov 07 23:31:38 addons-862145 kubelet[1339]: E1107 23:31:38.748746    1339 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-862145" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-862145' and this object
	I1107 23:32:39.965482 1455521 out.go:309] Setting ErrFile to fd 2...
	I1107 23:32:39.965508 1455521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:32:39.990819 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:39.992157 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:40.136590 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:40.487648 1455521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:32:40.489256 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:40.656510 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:40.984746 1455521 kapi.go:107] duration metric: took 1m31.031452073s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1107 23:32:40.986481 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:41.135027 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:41.484424 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:41.635527 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:41.985680 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:42.138938 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:42.485813 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:42.636776 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:42.994544 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:43.135167 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:43.484588 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:43.635456 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:43.986055 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:44.136638 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:44.484835 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:44.635364 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:44.984257 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:45.152927 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:45.485428 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:45.635827 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:45.988937 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:46.136929 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:32:46.486261 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:46.639654 1455521 kapi.go:107] duration metric: took 1m32.520988046s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1107 23:32:46.641880 1455521 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-862145 cluster.
	I1107 23:32:46.643666 1455521 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1107 23:32:46.645693 1455521 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1107 23:32:46.984718 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:47.483715 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:47.984695 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:48.485471 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:48.985066 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:49.484773 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:49.979661 1455521 system_pods.go:59] 18 kube-system pods found
	I1107 23:32:49.979734 1455521 system_pods.go:61] "coredns-5dd5756b68-qbq8g" [2e5ed0bc-4c45-4571-bb69-88126cd88ed1] Running
	I1107 23:32:49.979757 1455521 system_pods.go:61] "csi-hostpath-attacher-0" [f14943a2-8e6f-4a4c-a895-75f8a615e361] Running
	I1107 23:32:49.979781 1455521 system_pods.go:61] "csi-hostpath-resizer-0" [2e7e8ce1-9b88-46f4-bfdf-7f5acdd824e1] Running
	I1107 23:32:49.979824 1455521 system_pods.go:61] "csi-hostpathplugin-6nmlj" [6341e0f0-a42f-436c-8546-8d02971ac05d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:32:49.979855 1455521 system_pods.go:61] "etcd-addons-862145" [0c5381f5-109f-4601-985e-5b822a645ad6] Running
	I1107 23:32:49.979880 1455521 system_pods.go:61] "kindnet-lmzp5" [95aab8c0-396b-4742-a1e3-0be8049b871c] Running
	I1107 23:32:49.979900 1455521 system_pods.go:61] "kube-apiserver-addons-862145" [907b448b-1385-499d-947b-0616a5a73eda] Running
	I1107 23:32:49.979933 1455521 system_pods.go:61] "kube-controller-manager-addons-862145" [df5b6b20-5d54-42df-ae02-4fa64f8e37e8] Running
	I1107 23:32:49.979961 1455521 system_pods.go:61] "kube-ingress-dns-minikube" [795e47bf-77e6-4fa6-b2c9-f188d1ec7de2] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:32:49.979981 1455521 system_pods.go:61] "kube-proxy-mlpwh" [2508e74d-cb27-4551-adad-d29c9ffdd533] Running
	I1107 23:32:49.980001 1455521 system_pods.go:61] "kube-scheduler-addons-862145" [91729777-b6b5-43e0-93a2-ed8a15b62ba2] Running
	I1107 23:32:49.980031 1455521 system_pods.go:61] "metrics-server-7c66d45ddc-dcc2j" [1ba7f677-6e2b-446c-849f-3ac9b4119c36] Running
	I1107 23:32:49.980053 1455521 system_pods.go:61] "nvidia-device-plugin-daemonset-2mxvg" [af169897-3b36-43a5-87a8-fead1e07bc56] Running
	I1107 23:32:49.980071 1455521 system_pods.go:61] "registry-proxy-szzdm" [c1a5f788-6327-49f4-8217-bf0ad8166124] Running
	I1107 23:32:49.980090 1455521 system_pods.go:61] "registry-qmpph" [b847a662-a103-451a-bab8-36206ce3090e] Running
	I1107 23:32:49.980109 1455521 system_pods.go:61] "snapshot-controller-58dbcc7b99-47d8l" [e94b2508-a6f2-4377-8b9c-138e8563c62e] Running
	I1107 23:32:49.980138 1455521 system_pods.go:61] "snapshot-controller-58dbcc7b99-n8mcr" [d5a43254-2c44-46af-bbcc-843321638028] Running
	I1107 23:32:49.980165 1455521 system_pods.go:61] "storage-provisioner" [c1bd0b30-59a6-4754-a073-12541d44fd1c] Running
	I1107 23:32:49.980187 1455521 system_pods.go:74] duration metric: took 12.150938351s to wait for pod list to return data ...
	I1107 23:32:49.980206 1455521 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:32:49.987437 1455521 default_sa.go:45] found service account: "default"
	I1107 23:32:49.987463 1455521 default_sa.go:55] duration metric: took 7.236429ms for default service account to be created ...
	I1107 23:32:49.987473 1455521 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:32:49.991555 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:49.999538 1455521 system_pods.go:86] 18 kube-system pods found
	I1107 23:32:49.999584 1455521 system_pods.go:89] "coredns-5dd5756b68-qbq8g" [2e5ed0bc-4c45-4571-bb69-88126cd88ed1] Running
	I1107 23:32:49.999594 1455521 system_pods.go:89] "csi-hostpath-attacher-0" [f14943a2-8e6f-4a4c-a895-75f8a615e361] Running
	I1107 23:32:49.999600 1455521 system_pods.go:89] "csi-hostpath-resizer-0" [2e7e8ce1-9b88-46f4-bfdf-7f5acdd824e1] Running
	I1107 23:32:49.999609 1455521 system_pods.go:89] "csi-hostpathplugin-6nmlj" [6341e0f0-a42f-436c-8546-8d02971ac05d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:32:49.999616 1455521 system_pods.go:89] "etcd-addons-862145" [0c5381f5-109f-4601-985e-5b822a645ad6] Running
	I1107 23:32:49.999626 1455521 system_pods.go:89] "kindnet-lmzp5" [95aab8c0-396b-4742-a1e3-0be8049b871c] Running
	I1107 23:32:49.999631 1455521 system_pods.go:89] "kube-apiserver-addons-862145" [907b448b-1385-499d-947b-0616a5a73eda] Running
	I1107 23:32:49.999638 1455521 system_pods.go:89] "kube-controller-manager-addons-862145" [df5b6b20-5d54-42df-ae02-4fa64f8e37e8] Running
	I1107 23:32:49.999645 1455521 system_pods.go:89] "kube-ingress-dns-minikube" [795e47bf-77e6-4fa6-b2c9-f188d1ec7de2] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:32:49.999650 1455521 system_pods.go:89] "kube-proxy-mlpwh" [2508e74d-cb27-4551-adad-d29c9ffdd533] Running
	I1107 23:32:49.999656 1455521 system_pods.go:89] "kube-scheduler-addons-862145" [91729777-b6b5-43e0-93a2-ed8a15b62ba2] Running
	I1107 23:32:49.999661 1455521 system_pods.go:89] "metrics-server-7c66d45ddc-dcc2j" [1ba7f677-6e2b-446c-849f-3ac9b4119c36] Running
	I1107 23:32:49.999667 1455521 system_pods.go:89] "nvidia-device-plugin-daemonset-2mxvg" [af169897-3b36-43a5-87a8-fead1e07bc56] Running
	I1107 23:32:49.999672 1455521 system_pods.go:89] "registry-proxy-szzdm" [c1a5f788-6327-49f4-8217-bf0ad8166124] Running
	I1107 23:32:49.999676 1455521 system_pods.go:89] "registry-qmpph" [b847a662-a103-451a-bab8-36206ce3090e] Running
	I1107 23:32:49.999682 1455521 system_pods.go:89] "snapshot-controller-58dbcc7b99-47d8l" [e94b2508-a6f2-4377-8b9c-138e8563c62e] Running
	I1107 23:32:49.999686 1455521 system_pods.go:89] "snapshot-controller-58dbcc7b99-n8mcr" [d5a43254-2c44-46af-bbcc-843321638028] Running
	I1107 23:32:49.999691 1455521 system_pods.go:89] "storage-provisioner" [c1bd0b30-59a6-4754-a073-12541d44fd1c] Running
	I1107 23:32:49.999699 1455521 system_pods.go:126] duration metric: took 12.220462ms to wait for k8s-apps to be running ...
	I1107 23:32:49.999707 1455521 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:32:49.999777 1455521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:32:50.021399 1455521 system_svc.go:56] duration metric: took 21.679642ms WaitForService to wait for kubelet.
	I1107 23:32:50.021426 1455521 kubeadm.go:581] duration metric: took 1m46.199109443s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:32:50.021471 1455521 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:32:50.025668 1455521 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1107 23:32:50.025707 1455521 node_conditions.go:123] node cpu capacity is 2
	I1107 23:32:50.025722 1455521 node_conditions.go:105] duration metric: took 4.245258ms to run NodePressure ...
	I1107 23:32:50.025735 1455521 start.go:228] waiting for startup goroutines ...
	I1107 23:32:50.484859 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:50.985852 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:51.485724 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:51.985355 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:52.484252 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:52.983874 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:53.485069 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:53.985720 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:54.491180 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:54.985011 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:55.484814 1455521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:32:55.984721 1455521 kapi.go:107] duration metric: took 1m45.633027246s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1107 23:32:55.986821 1455521 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, inspektor-gadget, storage-provisioner, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1107 23:32:55.988671 1455521 addons.go:502] enable addons completed in 1m52.430108576s: enabled=[ingress-dns nvidia-device-plugin cloud-spanner default-storageclass inspektor-gadget storage-provisioner metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1107 23:32:55.988722 1455521 start.go:233] waiting for cluster config update ...
	I1107 23:32:55.988759 1455521 start.go:242] writing updated cluster config ...
	I1107 23:32:55.989063 1455521 ssh_runner.go:195] Run: rm -f paused
	I1107 23:32:56.081206 1455521 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1107 23:32:56.082946 1455521 out.go:177] * Done! kubectl is now configured to use "addons-862145" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 07 23:36:51 addons-862145 conmon[9208]: conmon 9dfafbbda4bf17738a94 <ninfo>: container 9218 exited with status 1
	Nov 07 23:36:51 addons-862145 crio[878]: time="2023-11-07 23:36:51.666155797Z" level=info msg="Started container" PID=9218 containerID=9dfafbbda4bf17738a94a6e082388e55530f9b404c0cb435c32d5f2b85c7c7fb description=default/hello-world-app-5d77478584-tszj6/hello-world-app id=e313610d-c327-4c75-816d-23e09d964c9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4cfba0b870a12a7d694c2cdd4b00fe33c2e05394b3d6b35d1b3add1c39fb2aef
	Nov 07 23:36:51 addons-862145 crio[878]: time="2023-11-07 23:36:51.962926349Z" level=info msg="Removing container: ace48f6af706017e73d54f795bedf327b94a60c44551f84c49766583c2906b18" id=3fa5c6f9-7532-4a88-8793-c85e4505dc49 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 07 23:36:51 addons-862145 crio[878]: time="2023-11-07 23:36:51.980961796Z" level=info msg="Removed container ace48f6af706017e73d54f795bedf327b94a60c44551f84c49766583c2906b18: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=3fa5c6f9-7532-4a88-8793-c85e4505dc49 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 07 23:36:51 addons-862145 crio[878]: time="2023-11-07 23:36:51.982540448Z" level=info msg="Removing container: 4239332a8d5a4d794d189535cff110b43692c20f59e09c212e5c3b314915fec2" id=edf6798e-ff54-476a-81ba-cc640fc45f5a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 07 23:36:52 addons-862145 crio[878]: time="2023-11-07 23:36:52.012757986Z" level=info msg="Removed container 4239332a8d5a4d794d189535cff110b43692c20f59e09c212e5c3b314915fec2: default/hello-world-app-5d77478584-tszj6/hello-world-app" id=edf6798e-ff54-476a-81ba-cc640fc45f5a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 07 23:36:52 addons-862145 crio[878]: time="2023-11-07 23:36:52.014381857Z" level=info msg="Stopping pod sandbox: a2b845383321c2f7530100c774aed65cb5cc86c0f6867b174487a1ffac28c0db" id=f0897327-99ba-4ee6-9a40-644fbbdbb4a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 07 23:36:52 addons-862145 crio[878]: time="2023-11-07 23:36:52.014562368Z" level=info msg="Stopped pod sandbox (already stopped): a2b845383321c2f7530100c774aed65cb5cc86c0f6867b174487a1ffac28c0db" id=f0897327-99ba-4ee6-9a40-644fbbdbb4a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 07 23:36:52 addons-862145 crio[878]: time="2023-11-07 23:36:52.014953782Z" level=info msg="Removing pod sandbox: a2b845383321c2f7530100c774aed65cb5cc86c0f6867b174487a1ffac28c0db" id=38acc915-aa4e-4d12-bc8f-b98fed06fad8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 07 23:36:52 addons-862145 crio[878]: time="2023-11-07 23:36:52.026466629Z" level=info msg="Removed pod sandbox: a2b845383321c2f7530100c774aed65cb5cc86c0f6867b174487a1ffac28c0db" id=38acc915-aa4e-4d12-bc8f-b98fed06fad8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 07 23:36:54 addons-862145 crio[878]: time="2023-11-07 23:36:54.073628413Z" level=info msg="Stopping container: 7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79 (timeout: 2s)" id=e0aa3c7d-6707-4579-99b7-b634f9ceff45 name=/runtime.v1.RuntimeService/StopContainer
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.083800582Z" level=warning msg="Stopping container 7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=e0aa3c7d-6707-4579-99b7-b634f9ceff45 name=/runtime.v1.RuntimeService/StopContainer
	Nov 07 23:36:56 addons-862145 conmon[5376]: conmon 7f782dac9d06dbf9638b <ninfo>: container 5387 exited with status 137
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.243753860Z" level=info msg="Stopped container 7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79: ingress-nginx/ingress-nginx-controller-7c6974c4d8-4znk9/controller" id=e0aa3c7d-6707-4579-99b7-b634f9ceff45 name=/runtime.v1.RuntimeService/StopContainer
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.244296042Z" level=info msg="Stopping pod sandbox: 5d03909035855a357ccb046ca165e74e7d83ac6aa8d05fa80a5abe7154d4bae9" id=66482975-e17e-45aa-907a-83fb8b128b8a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.248884136Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-B5CI55C32NORQUOK - [0:0]\n:KUBE-HP-NIOQE6GIDHAKKJIZ - [0:0]\n-X KUBE-HP-B5CI55C32NORQUOK\n-X KUBE-HP-NIOQE6GIDHAKKJIZ\nCOMMIT\n"
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.250565269Z" level=info msg="Closing host port tcp:80"
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.250614811Z" level=info msg="Closing host port tcp:443"
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.252243506Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.252271330Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.252433248Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-4znk9 Namespace:ingress-nginx ID:5d03909035855a357ccb046ca165e74e7d83ac6aa8d05fa80a5abe7154d4bae9 UID:36ce132a-2be5-4c19-a052-fc9c8c972b13 NetNS:/var/run/netns/49b35d2f-ff44-4bfd-ae83-7ad33ab569ba Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.252587438Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-4znk9 from CNI network \"kindnet\" (type=ptp)"
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.283663947Z" level=info msg="Stopped pod sandbox: 5d03909035855a357ccb046ca165e74e7d83ac6aa8d05fa80a5abe7154d4bae9" id=66482975-e17e-45aa-907a-83fb8b128b8a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.367966196Z" level=info msg="Removing container: 7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79" id=311aa2d3-985e-475d-a1b4-3fc22c5c5171 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 07 23:36:56 addons-862145 crio[878]: time="2023-11-07 23:36:56.386759625Z" level=info msg="Removed container 7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79: ingress-nginx/ingress-nginx-controller-7c6974c4d8-4znk9/controller" id=311aa2d3-985e-475d-a1b4-3fc22c5c5171 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dfafbbda4bf1       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             9 seconds ago       Exited              hello-world-app           2                   4cfba0b870a12       hello-world-app-5d77478584-tszj6
	3e008f8f90a65       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                              2 minutes ago       Running             nginx                     0                   0f70707746ce9       nginx
	7631cb49f7af0       ghcr.io/headlamp-k8s/headlamp@sha256:8e813897da00c345b1169d624b32e2367e5da1dbbffe33226f8a92973b816b50                        3 minutes ago       Running             headlamp                  0                   bea1a903648da       headlamp-94b766c-bvwfw
	14b73527abb19       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 4 minutes ago       Running             gcp-auth                  0                   4c4a461ce1fd2       gcp-auth-d4c87556c-5zx9l
	eb6fcce68ae44       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago       Exited              patch                     0                   2b787e68eb9d9       ingress-nginx-admission-patch-xrd5d
	d38c8d742fef6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago       Exited              create                    0                   e643baee4830c       ingress-nginx-admission-create-6nzjh
	155073746b544       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             5 minutes ago       Running             coredns                   0                   62aa868e66cda       coredns-5dd5756b68-qbq8g
	805090a0f3ec2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   7caf115b5bbd6       storage-provisioner
	a77a7e512f8ce       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                             5 minutes ago       Running             kube-proxy                0                   204c53cc14a67       kube-proxy-mlpwh
	2c4ed593ab284       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago       Running             kindnet-cni               0                   d62b68af90113       kindnet-lmzp5
	f5835f38477f4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             6 minutes ago       Running             etcd                      0                   f796210e8338e       etcd-addons-862145
	ec771611107ca       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                             6 minutes ago       Running             kube-scheduler            0                   80a4d35543c21       kube-scheduler-addons-862145
	551416ccd15b5       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                             6 minutes ago       Running             kube-controller-manager   0                   e3e687499b6ea       kube-controller-manager-addons-862145
	a97f0665e25f6       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                                             6 minutes ago       Running             kube-apiserver            0                   b4487be36da49       kube-apiserver-addons-862145
	
	* 
	* ==> coredns [155073746b5442e6047657ccb4e4af078b67a786f26a11616e7207ec0f8378e9] <==
	* [INFO] 10.244.0.18:55487 - 26721 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050535s
	[INFO] 10.244.0.18:55487 - 48479 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000114739s
	[INFO] 10.244.0.18:57514 - 9590 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001944712s
	[INFO] 10.244.0.18:55487 - 60033 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001151251s
	[INFO] 10.244.0.18:57514 - 48056 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000128138s
	[INFO] 10.244.0.18:55487 - 10909 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001045143s
	[INFO] 10.244.0.18:55487 - 12332 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059971s
	[INFO] 10.244.0.18:53845 - 51662 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000116536s
	[INFO] 10.244.0.18:46748 - 5382 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038966s
	[INFO] 10.244.0.18:53845 - 43936 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051281s
	[INFO] 10.244.0.18:53845 - 22498 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046006s
	[INFO] 10.244.0.18:53845 - 58899 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045366s
	[INFO] 10.244.0.18:53845 - 6473 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053965s
	[INFO] 10.244.0.18:46748 - 42197 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049091s
	[INFO] 10.244.0.18:53845 - 10158 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032049s
	[INFO] 10.244.0.18:46748 - 55921 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058042s
	[INFO] 10.244.0.18:46748 - 59091 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040106s
	[INFO] 10.244.0.18:46748 - 61928 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000331107s
	[INFO] 10.244.0.18:53845 - 53736 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001507129s
	[INFO] 10.244.0.18:46748 - 27282 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056779s
	[INFO] 10.244.0.18:53845 - 49786 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001472101s
	[INFO] 10.244.0.18:46748 - 63402 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001210302s
	[INFO] 10.244.0.18:53845 - 15072 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075929s
	[INFO] 10.244.0.18:46748 - 49287 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001129114s
	[INFO] 10.244.0.18:46748 - 14745 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006418s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-862145
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-862145
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=addons-862145
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_30_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-862145
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:30:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-862145
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:36:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:36:59 +0000   Tue, 07 Nov 2023 23:30:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:36:59 +0000   Tue, 07 Nov 2023 23:30:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:36:59 +0000   Tue, 07 Nov 2023 23:30:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:36:59 +0000   Tue, 07 Nov 2023 23:31:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-862145
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 02425d1ccf964608a6c60bb5c24c7061
	  System UUID:                8274b5ba-f7eb-4ac1-9522-e84844277b84
	  Boot ID:                    b7db73c9-0d39-49c2-bed0-71d8dac21d90
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-tszj6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-5zx9l                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  headlamp                    headlamp-94b766c-bvwfw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-5dd5756b68-qbq8g                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m54s
	  kube-system                 etcd-addons-862145                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m10s
	  kube-system                 kindnet-lmzp5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m56s
	  kube-system                 kube-apiserver-addons-862145             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-addons-862145    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-mlpwh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-scheduler-addons-862145             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (2%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node addons-862145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node addons-862145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s (x8 over 6m17s)  kubelet          Node addons-862145 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m10s                  kubelet          Node addons-862145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s                  kubelet          Node addons-862145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s                  kubelet          Node addons-862145 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m58s                  node-controller  Node addons-862145 event: Registered Node addons-862145 in Controller
	  Normal  NodeReady                5m23s                  kubelet          Node addons-862145 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000944] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=000000008fa34c3c
	[  +0.001054] FS-Cache: N-key=[8] '83d5c90000000000'
	[  +0.002968] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000967] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=000000002bce5c9d
	[  +0.001114] FS-Cache: O-key=[8] '83d5c90000000000'
	[  +0.000705] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=000000001ba85f21
	[  +0.001086] FS-Cache: N-key=[8] '83d5c90000000000'
	[Nov 7 22:25] FS-Cache: Duplicate cookie detected
	[  +0.000736] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=0000000055832ffe
	[  +0.001122] FS-Cache: O-key=[8] '82d5c90000000000'
	[  +0.000769] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=000000006496a620
	[  +0.001099] FS-Cache: N-key=[8] '82d5c90000000000'
	[  +0.420977] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=000000007e937395
	[  +0.001103] FS-Cache: O-key=[8] '88d5c90000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=000000009053f15c
	[  +0.001119] FS-Cache: N-key=[8] '88d5c90000000000'
	[Nov 7 22:50] hrtimer: interrupt took 17360670 ns
	[Nov 7 23:04] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	
	* 
	* ==> etcd [f5835f38477f4cd2a95df3336887310b2bace64d35bd4b71b142f418f6bfdfed] <==
	* {"level":"info","ts":"2023-11-07T23:30:46.009847Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:30:46.00987Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:30:46.05402Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-07T23:30:46.054125Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-11-07T23:31:06.183923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.691566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2023-11-07T23:31:06.194147Z","caller":"traceutil/trace.go:171","msg":"trace[1911881504] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:363; }","duration":"122.926288ms","start":"2023-11-07T23:31:06.071196Z","end":"2023-11-07T23:31:06.194124Z","steps":["trace[1911881504] 'agreement among raft nodes before linearized reading'  (duration: 72.54348ms)","trace[1911881504] 'range keys from in-memory index tree'  (duration: 40.11584ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-07T23:31:07.42154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.745998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-07T23:31:07.421674Z","caller":"traceutil/trace.go:171","msg":"trace[2129898899] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:388; }","duration":"100.901057ms","start":"2023-11-07T23:31:07.320759Z","end":"2023-11-07T23:31:07.42166Z","steps":["trace[2129898899] 'agreement among raft nodes before linearized reading'  (duration: 58.167056ms)","trace[2129898899] 'range keys from in-memory index tree'  (duration: 42.565256ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-07T23:31:07.422018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.331717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-862145\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2023-11-07T23:31:07.422095Z","caller":"traceutil/trace.go:171","msg":"trace[1815737906] range","detail":"{range_begin:/registry/minions/addons-862145; range_end:; response_count:1; response_revision:388; }","duration":"101.411256ms","start":"2023-11-07T23:31:07.320674Z","end":"2023-11-07T23:31:07.422085Z","steps":["trace[1815737906] 'agreement among raft nodes before linearized reading'  (duration: 58.233336ms)","trace[1815737906] 'range keys from in-memory index tree'  (duration: 43.065174ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-07T23:31:07.422824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.852451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-07T23:31:07.42357Z","caller":"traceutil/trace.go:171","msg":"trace[658509430] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:388; }","duration":"102.749394ms","start":"2023-11-07T23:31:07.320808Z","end":"2023-11-07T23:31:07.423557Z","steps":["trace[658509430] 'agreement among raft nodes before linearized reading'  (duration: 58.141702ms)","trace[658509430] 'range keys from in-memory index tree'  (duration: 43.699851ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-07T23:31:07.676851Z","caller":"traceutil/trace.go:171","msg":"trace[1242621650] linearizableReadLoop","detail":"{readStateIndex:404; appliedIndex:403; }","duration":"124.676075ms","start":"2023-11-07T23:31:07.552165Z","end":"2023-11-07T23:31:07.676841Z","steps":["trace[1242621650] 'read index received'  (duration: 37.916243ms)","trace[1242621650] 'applied index is now lower than readState.Index'  (duration: 86.759101ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-07T23:31:07.677086Z","caller":"traceutil/trace.go:171","msg":"trace[1430013305] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"106.898235ms","start":"2023-11-07T23:31:07.57018Z","end":"2023-11-07T23:31:07.677078Z","steps":["trace[1430013305] 'process raft request'  (duration: 106.257838ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:31:07.68623Z","caller":"traceutil/trace.go:171","msg":"trace[1096956836] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"115.457836ms","start":"2023-11-07T23:31:07.570744Z","end":"2023-11-07T23:31:07.686202Z","steps":["trace[1096956836] 'process raft request'  (duration: 105.794465ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:31:07.686398Z","caller":"traceutil/trace.go:171","msg":"trace[1255471186] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"115.570344ms","start":"2023-11-07T23:31:07.570821Z","end":"2023-11-07T23:31:07.686391Z","steps":["trace[1255471186] 'process raft request'  (duration: 105.762507ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:31:07.686555Z","caller":"traceutil/trace.go:171","msg":"trace[430062537] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"115.04545ms","start":"2023-11-07T23:31:07.571502Z","end":"2023-11-07T23:31:07.686548Z","steps":["trace[430062537] 'process raft request'  (duration: 105.102508ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-07T23:31:07.686796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.634853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-11-07T23:31:07.686829Z","caller":"traceutil/trace.go:171","msg":"trace[1298083302] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:398; }","duration":"134.67811ms","start":"2023-11-07T23:31:07.552143Z","end":"2023-11-07T23:31:07.686821Z","steps":["trace[1298083302] 'agreement among raft nodes before linearized reading'  (duration: 134.599571ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:31:07.689934Z","caller":"traceutil/trace.go:171","msg":"trace[1284847627] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"124.479219ms","start":"2023-11-07T23:31:07.551885Z","end":"2023-11-07T23:31:07.676364Z","steps":["trace[1284847627] 'process raft request'  (duration: 38.465072ms)","trace[1284847627] 'compare'  (duration: 61.844368ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-07T23:31:07.861602Z","caller":"traceutil/trace.go:171","msg":"trace[1019641886] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"144.082595ms","start":"2023-11-07T23:31:07.717503Z","end":"2023-11-07T23:31:07.861586Z","steps":["trace[1019641886] 'process raft request'  (duration: 67.881446ms)","trace[1019641886] 'compare'  (duration: 75.50225ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-07T23:31:07.8713Z","caller":"traceutil/trace.go:171","msg":"trace[571954629] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"153.587477ms","start":"2023-11-07T23:31:07.717694Z","end":"2023-11-07T23:31:07.871281Z","steps":["trace[571954629] 'process raft request'  (duration: 143.441837ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:31:07.876385Z","caller":"traceutil/trace.go:171","msg":"trace[21322008] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"158.594263ms","start":"2023-11-07T23:31:07.717776Z","end":"2023-11-07T23:31:07.87637Z","steps":["trace[21322008] 'process raft request'  (duration: 143.435265ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:31:07.87653Z","caller":"traceutil/trace.go:171","msg":"trace[2087871774] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"152.339652ms","start":"2023-11-07T23:31:07.72418Z","end":"2023-11-07T23:31:07.87652Z","steps":["trace[2087871774] 'process raft request'  (duration: 137.090792ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:31:07.876552Z","caller":"traceutil/trace.go:171","msg":"trace[1645242836] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"152.228998ms","start":"2023-11-07T23:31:07.724318Z","end":"2023-11-07T23:31:07.876547Z","steps":["trace[1645242836] 'process raft request'  (duration: 136.980122ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [14b73527abb192a9d644328200077275ff919eab92201a52bb4d5062fc5f617e] <==
	* 2023/11/07 23:32:46 GCP Auth Webhook started!
	2023/11/07 23:33:02 Ready to marshal response ...
	2023/11/07 23:33:02 Ready to write response ...
	2023/11/07 23:33:02 Ready to marshal response ...
	2023/11/07 23:33:02 Ready to write response ...
	2023/11/07 23:33:06 Ready to marshal response ...
	2023/11/07 23:33:06 Ready to write response ...
	2023/11/07 23:33:12 Ready to marshal response ...
	2023/11/07 23:33:12 Ready to write response ...
	2023/11/07 23:33:18 Ready to marshal response ...
	2023/11/07 23:33:18 Ready to write response ...
	2023/11/07 23:33:18 Ready to marshal response ...
	2023/11/07 23:33:18 Ready to write response ...
	2023/11/07 23:33:18 Ready to marshal response ...
	2023/11/07 23:33:18 Ready to write response ...
	2023/11/07 23:33:37 Ready to marshal response ...
	2023/11/07 23:33:37 Ready to write response ...
	2023/11/07 23:33:55 Ready to marshal response ...
	2023/11/07 23:33:55 Ready to write response ...
	2023/11/07 23:34:14 Ready to marshal response ...
	2023/11/07 23:34:14 Ready to write response ...
	2023/11/07 23:36:35 Ready to marshal response ...
	2023/11/07 23:36:35 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:37:01 up  6:19,  0 users,  load average: 0.34, 1.61, 2.80
	Linux addons-862145 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [2c4ed593ab284cd61ceb4d2e25979c4263c5e3043a15b14ca432fdb7faab34f6] <==
	* I1107 23:34:58.787345       1 main.go:227] handling current node
	I1107 23:35:08.791331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:35:08.810043       1 main.go:227] handling current node
	I1107 23:35:18.814700       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:35:18.814731       1 main.go:227] handling current node
	I1107 23:35:28.827542       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:35:28.827575       1 main.go:227] handling current node
	I1107 23:35:38.831371       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:35:38.831496       1 main.go:227] handling current node
	I1107 23:35:48.836167       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:35:48.836196       1 main.go:227] handling current node
	I1107 23:35:58.840702       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:35:58.840732       1 main.go:227] handling current node
	I1107 23:36:08.844885       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:36:08.844916       1 main.go:227] handling current node
	I1107 23:36:18.857066       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:36:18.857091       1 main.go:227] handling current node
	I1107 23:36:28.864487       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:36:28.864515       1 main.go:227] handling current node
	I1107 23:36:38.878362       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:36:38.878501       1 main.go:227] handling current node
	I1107 23:36:48.890577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:36:48.890607       1 main.go:227] handling current node
	I1107 23:36:58.901745       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:36:58.901772       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [a97f0665e25f6b4a4a4dd7eab1fe27a915ae30e81042bdd0c864b008fd71f567] <==
	* W1107 23:34:09.522622       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1107 23:34:12.891802       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:34:12.891946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:34:12.906639       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:34:12.906701       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:34:12.924572       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:34:12.924710       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:34:12.935800       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:34:12.937315       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:34:12.954073       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:34:12.954157       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:34:12.955737       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:34:12.955852       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:34:12.972707       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:34:12.972756       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:34:12.985813       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:34:12.985873       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1107 23:34:13.956119       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1107 23:34:13.972763       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1107 23:34:14.017434       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1107 23:34:14.146360       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1107 23:34:14.502497       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.161.166"}
	I1107 23:34:45.956701       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1107 23:36:35.671796       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.252.216"}
	E1107 23:36:52.394504       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4007a46f60), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400b8049b0), ResponseWriter:(*httpsnoop.rw)(0x400b8049b0), Flusher:(*httpsnoop.rw)(0x400b8049b0), CloseNotifier:(*httpsnoop.rw)(0x400b8049b0), Pusher:(*httpsnoop.rw)(0x400b8049b0)}}, encoder:(*versioning.codec)(0x400d43cf00), memAllocator:(*runtime.Allocator)(0x400d2aef60)})
	
	* 
	* ==> kube-controller-manager [551416ccd15b5436f7638457994f4dcc51d6e54b5d3e9ed2d12daba76c885aee] <==
	* W1107 23:36:08.364584       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:36:08.364700       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:36:13.323794       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:36:13.323900       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:36:27.371030       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:36:27.371066       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1107 23:36:35.377785       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1107 23:36:35.404997       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-tszj6"
	I1107 23:36:35.410425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="32.461743ms"
	I1107 23:36:35.444045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="33.30075ms"
	I1107 23:36:35.444389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.402µs"
	I1107 23:36:35.445141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.707µs"
	I1107 23:36:39.362825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.443µs"
	I1107 23:36:40.345639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.342µs"
	I1107 23:36:41.349001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.263µs"
	W1107 23:36:44.356525       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:36:44.357086       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:36:47.177316       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:36:47.177357       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:36:49.656412       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:36:49.656446       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1107 23:36:52.379170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.617µs"
	I1107 23:36:53.053734       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1107 23:36:53.056686       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="8.369µs"
	I1107 23:36:53.062264       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	* 
	* ==> kube-proxy [a77a7e512f8ce062fa6c2f5850ce2202772d18fd42591199a8bd7d4a9c2a203f] <==
	* I1107 23:31:09.252548       1 server_others.go:69] "Using iptables proxy"
	I1107 23:31:09.366423       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1107 23:31:09.631967       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1107 23:31:09.647630       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:31:09.647734       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1107 23:31:09.647767       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1107 23:31:09.647914       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:31:09.655991       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:31:09.656723       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:31:09.662172       1 config.go:188] "Starting service config controller"
	I1107 23:31:09.671616       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:31:09.671637       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:31:09.662396       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:31:09.671676       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:31:09.671682       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1107 23:31:09.670084       1 config.go:315] "Starting node config controller"
	I1107 23:31:09.671745       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:31:09.771815       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ec771611107cab1c64abbea640f6d31732b3dfa0d4d4cf1d45fcf57b0235b54b] <==
	* W1107 23:30:48.890471       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:30:48.891176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:30:48.891187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 23:30:48.891228       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:30:48.891159       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:30:48.890510       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1107 23:30:48.891265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 23:30:48.890596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:30:48.891289       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 23:30:48.890662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 23:30:48.891319       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:30:48.891331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 23:30:48.890722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:30:48.891350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1107 23:30:48.890780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:30:48.891365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 23:30:48.890919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:30:48.891407       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 23:30:48.890977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:30:48.891420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1107 23:30:48.891052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 23:30:48.891442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1107 23:30:48.891107       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:30:48.891454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1107 23:30:50.576975       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 07 23:36:51 addons-862145 kubelet[1339]: E1107 23:36:51.756180    1339 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fc8a63cd2ed425cf5eedaf59ed6c158ff4b4297e0cb140a6197d86d658a256af/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fc8a63cd2ed425cf5eedaf59ed6c158ff4b4297e0cb140a6197d86d658a256af/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:36:51 addons-862145 kubelet[1339]: E1107 23:36:51.759223    1339 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bc5edba0ca547b4efb82cc5b9fc00e4bf16ad5a7a74023035560bf97a2966ccd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bc5edba0ca547b4efb82cc5b9fc00e4bf16ad5a7a74023035560bf97a2966ccd/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:36:51 addons-862145 kubelet[1339]: E1107 23:36:51.770468    1339 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dfdedc91b2bb2b72b7fb55f1b69ab62210d5832eb0092e3826a0063f05b53f52/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dfdedc91b2bb2b72b7fb55f1b69ab62210d5832eb0092e3826a0063f05b53f52/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:36:51 addons-862145 kubelet[1339]: E1107 23:36:51.790026    1339 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5d15bd7b018d08b621e3a3b773c3d91d19e303f1f26c1a37688661599d0eeea8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5d15bd7b018d08b621e3a3b773c3d91d19e303f1f26c1a37688661599d0eeea8/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:36:51 addons-862145 kubelet[1339]: E1107 23:36:51.794277    1339 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bc5edba0ca547b4efb82cc5b9fc00e4bf16ad5a7a74023035560bf97a2966ccd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bc5edba0ca547b4efb82cc5b9fc00e4bf16ad5a7a74023035560bf97a2966ccd/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:36:51 addons-862145 kubelet[1339]: E1107 23:36:51.795472    1339 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/16cd83929e380104c033daeeb39eaa04d6513bc64e5dea9de623a6ed520ec21c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/16cd83929e380104c033daeeb39eaa04d6513bc64e5dea9de623a6ed520ec21c/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:36:51 addons-862145 kubelet[1339]: I1107 23:36:51.817716    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zj5h7\" (UniqueName: \"kubernetes.io/projected/795e47bf-77e6-4fa6-b2c9-f188d1ec7de2-kube-api-access-zj5h7\") on node \"addons-862145\" DevicePath \"\""
	Nov 07 23:36:51 addons-862145 kubelet[1339]: I1107 23:36:51.961613    1339 scope.go:117] "RemoveContainer" containerID="ace48f6af706017e73d54f795bedf327b94a60c44551f84c49766583c2906b18"
	Nov 07 23:36:51 addons-862145 kubelet[1339]: I1107 23:36:51.981305    1339 scope.go:117] "RemoveContainer" containerID="4239332a8d5a4d794d189535cff110b43692c20f59e09c212e5c3b314915fec2"
	Nov 07 23:36:52 addons-862145 kubelet[1339]: I1107 23:36:52.358255    1339 scope.go:117] "RemoveContainer" containerID="9dfafbbda4bf17738a94a6e082388e55530f9b404c0cb435c32d5f2b85c7c7fb"
	Nov 07 23:36:52 addons-862145 kubelet[1339]: E1107 23:36:52.358552    1339 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-tszj6_default(66bb4293-a6c9-4b8b-859c-023406f446f1)\"" pod="default/hello-world-app-5d77478584-tszj6" podUID="66bb4293-a6c9-4b8b-859c-023406f446f1"
	Nov 07 23:36:53 addons-862145 kubelet[1339]: I1107 23:36:53.510305    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2f146ca0-4ead-4829-97ce-069b393e322b" path="/var/lib/kubelet/pods/2f146ca0-4ead-4829-97ce-069b393e322b/volumes"
	Nov 07 23:36:53 addons-862145 kubelet[1339]: I1107 23:36:53.510717    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="31dcc102-3b9f-4dc4-a535-8f0bacb55f2f" path="/var/lib/kubelet/pods/31dcc102-3b9f-4dc4-a535-8f0bacb55f2f/volumes"
	Nov 07 23:36:53 addons-862145 kubelet[1339]: I1107 23:36:53.511057    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="795e47bf-77e6-4fa6-b2c9-f188d1ec7de2" path="/var/lib/kubelet/pods/795e47bf-77e6-4fa6-b2c9-f188d1ec7de2/volumes"
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.366346    1339 scope.go:117] "RemoveContainer" containerID="7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79"
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.387029    1339 scope.go:117] "RemoveContainer" containerID="7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79"
	Nov 07 23:36:56 addons-862145 kubelet[1339]: E1107 23:36:56.387555    1339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79\": container with ID starting with 7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79 not found: ID does not exist" containerID="7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79"
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.387608    1339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79"} err="failed to get container status \"7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79\": rpc error: code = NotFound desc = could not find container \"7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79\": container with ID starting with 7f782dac9d06dbf9638bbcd6ab3bff8466cab10881b9ca4f24962c1345e14e79 not found: ID does not exist"
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.456126    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36ce132a-2be5-4c19-a052-fc9c8c972b13-webhook-cert\") pod \"36ce132a-2be5-4c19-a052-fc9c8c972b13\" (UID: \"36ce132a-2be5-4c19-a052-fc9c8c972b13\") "
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.456205    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npmx4\" (UniqueName: \"kubernetes.io/projected/36ce132a-2be5-4c19-a052-fc9c8c972b13-kube-api-access-npmx4\") pod \"36ce132a-2be5-4c19-a052-fc9c8c972b13\" (UID: \"36ce132a-2be5-4c19-a052-fc9c8c972b13\") "
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.458694    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ce132a-2be5-4c19-a052-fc9c8c972b13-kube-api-access-npmx4" (OuterVolumeSpecName: "kube-api-access-npmx4") pod "36ce132a-2be5-4c19-a052-fc9c8c972b13" (UID: "36ce132a-2be5-4c19-a052-fc9c8c972b13"). InnerVolumeSpecName "kube-api-access-npmx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.459503    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ce132a-2be5-4c19-a052-fc9c8c972b13-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "36ce132a-2be5-4c19-a052-fc9c8c972b13" (UID: "36ce132a-2be5-4c19-a052-fc9c8c972b13"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.557270    1339 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36ce132a-2be5-4c19-a052-fc9c8c972b13-webhook-cert\") on node \"addons-862145\" DevicePath \"\""
	Nov 07 23:36:56 addons-862145 kubelet[1339]: I1107 23:36:56.557314    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-npmx4\" (UniqueName: \"kubernetes.io/projected/36ce132a-2be5-4c19-a052-fc9c8c972b13-kube-api-access-npmx4\") on node \"addons-862145\" DevicePath \"\""
	Nov 07 23:36:57 addons-862145 kubelet[1339]: I1107 23:36:57.510343    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="36ce132a-2be5-4c19-a052-fc9c8c972b13" path="/var/lib/kubelet/pods/36ce132a-2be5-4c19-a052-fc9c8c972b13/volumes"
	
	* 
	* ==> storage-provisioner [805090a0f3ec2e597e9b2f922797cd67400ab0f74cef20ad95a5210e18231232] <==
	* I1107 23:31:39.704839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:31:39.725495       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:31:39.725737       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 23:31:39.735662       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 23:31:39.735843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-862145_96b80437-3a98-461f-9b5e-75ec7ffd30c5!
	I1107 23:31:39.736793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2fd227ad-72f3-499a-b961-d3a0c5b0265f", APIVersion:"v1", ResourceVersion:"877", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-862145_96b80437-3a98-461f-9b5e-75ec7ffd30c5 became leader
	I1107 23:31:39.847099       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-862145_96b80437-3a98-461f-9b5e-75ec7ffd30c5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-862145 -n addons-862145
helpers_test.go:261: (dbg) Run:  kubectl --context addons-862145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (169.67s)
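For triage of this failure mode, a hedged checklist (illustrative commands assuming the addons-862145 profile/context named in the dump above; these were not executed as part of this report) covering the objects the logs mention:

	# ingress-nginx controller pod state and node placement
	kubectl --context addons-862145 -n ingress-nginx get pods -o wide
	# the default/nginx service the apiserver log shows a clusterIP for, plus its endpoints
	kubectl --context addons-862145 get svc,endpoints nginx
	# any ingress objects admitted in the cluster
	kubectl --context addons-862145 get ingress -A
	# value the kube-proxy log above reports setting to 1 (sysctl path is an assumption about the node layout)
	out/minikube-linux-arm64 -p addons-862145 ssh -- sudo sysctl net.ipv4.conf.all.route_localnet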

                                                
                                    
x
+
TestFunctional/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-arm64 license: exit status 40 (297.881164ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.30s)
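A hedged follow-up for the 404 above (the flags are minikube's standard global logging flags, the log path is the one printed in the advice box; nothing here was run for this report):

	# re-run with verbose logging to surface which download URL returned 404
	out/minikube-linux-arm64 license --alsologtostderr -v=8
	# inspect the failure log referenced in the advice box
	cat /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log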

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (175.78s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-878254 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-878254 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.479846985s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-878254 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-878254 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [552ff040-7ff0-4108-9289-1c2305cd2da5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [552ff040-7ff0-4108-9289-1c2305cd2da5] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.025873466s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-878254 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1107 23:46:19.456500 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:19.461816 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:19.472093 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:19.492416 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:19.532668 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:19.613030 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:19.773664 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:20.094311 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:20.735233 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:22.015712 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:24.575932 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:29.696883 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:46:39.938005 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-878254 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.981556892s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-878254 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-878254 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1107 23:47:00.419210 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.01982415s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
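A hedged DNS-path check for the nslookup timeout above (profile/context names match this test; the ingress-dns pod selector is not assumed, so a plain grep is used; not part of the recorded run):

	# confirm the ingress-dns pod exists and is Running in kube-system
	kubectl --context ingress-addon-legacy-878254 -n kube-system get pods -o wide | grep -i ingress-dns
	# confirm something is listening on UDP 53 inside the node
	out/minikube-linux-arm64 -p ingress-addon-legacy-878254 ssh -- sudo ss -lunp
	# query the node IP directly with a bounded timeout instead of relying on nslookup defaults
	dig +time=5 +tries=1 hello-john.test @192.168.49.2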
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-878254 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-878254 addons disable ingress-dns --alsologtostderr -v=1: (2.101104603s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-878254 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-878254 addons disable ingress --alsologtostderr -v=1: (7.610171638s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-878254
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-878254:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c68b786ac001bd59b5228a2c7508c75189f9820d0244c433b487175ea12b7eb4",
	        "Created": "2023-11-07T23:42:50.645128056Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1484453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:42:51.020733076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/c68b786ac001bd59b5228a2c7508c75189f9820d0244c433b487175ea12b7eb4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c68b786ac001bd59b5228a2c7508c75189f9820d0244c433b487175ea12b7eb4/hostname",
	        "HostsPath": "/var/lib/docker/containers/c68b786ac001bd59b5228a2c7508c75189f9820d0244c433b487175ea12b7eb4/hosts",
	        "LogPath": "/var/lib/docker/containers/c68b786ac001bd59b5228a2c7508c75189f9820d0244c433b487175ea12b7eb4/c68b786ac001bd59b5228a2c7508c75189f9820d0244c433b487175ea12b7eb4-json.log",
	        "Name": "/ingress-addon-legacy-878254",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-878254:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-878254",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/66d78468222ab0f6f4b76b17938a4a1ef71970650ec4b33774f39d970ccf4b45-init/diff:/var/lib/docker/overlay2/8e491d7cb3241f95e04087f3d63eb57f6d89d07f6c4a9f8c41570cc55f16b330/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66d78468222ab0f6f4b76b17938a4a1ef71970650ec4b33774f39d970ccf4b45/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66d78468222ab0f6f4b76b17938a4a1ef71970650ec4b33774f39d970ccf4b45/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66d78468222ab0f6f4b76b17938a4a1ef71970650ec4b33774f39d970ccf4b45/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-878254",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-878254/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-878254",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-878254",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-878254",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "510e3e3e477367eb8056eca61ea6f6f28b97cef79ee3e06d835abeeea0bbcb6e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34080"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/510e3e3e4773",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-878254": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c68b786ac001",
	                        "ingress-addon-legacy-878254"
	                    ],
	                    "NetworkID": "8264cf51d4e0dbe18f61a5e76a7b513fe20adbad8e22f6111d256cb3b5f62aff",
	                    "EndpointID": "f8eb7d8692a8a3ad710eac988b415de0aa12235bff13a15109b886f7651c82cf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-878254 -n ingress-addon-legacy-878254
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-878254 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-878254 logs -n 25: (1.406640656s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-421985 image load --daemon                                  | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-421985               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985 image ls                                             | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	| image   | functional-421985 image load --daemon                                  | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-421985               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985 image ls                                             | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	| image   | functional-421985 image save                                           | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-421985               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985 image rm                                             | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-421985               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985 image ls                                             | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	| image   | functional-421985 image load                                           | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985 image ls                                             | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	| image   | functional-421985 image save --daemon                                  | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-421985               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985                                                      | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985                                                      | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985                                                      | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-421985                                                      | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-421985 ssh pgrep                                            | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-421985 image build -t                                       | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	|         | localhost/my-image:functional-421985                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-421985 image ls                                             | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	| delete  | -p functional-421985                                                   | functional-421985           | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:42 UTC |
	| start   | -p ingress-addon-legacy-878254                                         | ingress-addon-legacy-878254 | jenkins | v1.32.0 | 07 Nov 23 23:42 UTC | 07 Nov 23 23:44 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-878254                                            | ingress-addon-legacy-878254 | jenkins | v1.32.0 | 07 Nov 23 23:44 UTC | 07 Nov 23 23:44 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-878254                                            | ingress-addon-legacy-878254 | jenkins | v1.32.0 | 07 Nov 23 23:44 UTC | 07 Nov 23 23:44 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-878254                                            | ingress-addon-legacy-878254 | jenkins | v1.32.0 | 07 Nov 23 23:44 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-878254 ip                                         | ingress-addon-legacy-878254 | jenkins | v1.32.0 | 07 Nov 23 23:46 UTC | 07 Nov 23 23:46 UTC |
	| addons  | ingress-addon-legacy-878254                                            | ingress-addon-legacy-878254 | jenkins | v1.32.0 | 07 Nov 23 23:47 UTC | 07 Nov 23 23:47 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-878254                                            | ingress-addon-legacy-878254 | jenkins | v1.32.0 | 07 Nov 23 23:47 UTC | 07 Nov 23 23:47 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
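The last ssh row in the table has no end time because the in-guest curl never returned. Purely as an illustrative sketch (the binary path and profile name are the ones from the table; the two-minute budget is an assumed value, not the harness's exact timeout), the same probe can be replayed outside the test harness:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Re-run the ingress probe that hung in the Audit table above.
	// Binary path and profile name come from the table; the 2-minute
	// deadline is an assumed value for this sketch.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64",
		"-p", "ingress-addon-legacy-878254", "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s\nerr: %v\n", out, err)
}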
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:42:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:42:32.434374 1483996 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:42:32.434582 1483996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:42:32.434612 1483996 out.go:309] Setting ErrFile to fd 2...
	I1107 23:42:32.434632 1483996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:42:32.434949 1483996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1107 23:42:32.435385 1483996 out.go:303] Setting JSON to false
	I1107 23:42:32.436497 1483996 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23102,"bootTime":1699377451,"procs":391,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1107 23:42:32.436599 1483996 start.go:138] virtualization:  
	I1107 23:42:32.439117 1483996 out.go:177] * [ingress-addon-legacy-878254] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:42:32.441695 1483996 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:42:32.441856 1483996 notify.go:220] Checking for updates...
	I1107 23:42:32.445253 1483996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:42:32.447335 1483996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:42:32.449036 1483996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1107 23:42:32.450831 1483996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:42:32.452710 1483996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:42:32.454738 1483996 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:42:32.479341 1483996 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:42:32.479452 1483996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:42:32.562722 1483996 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-07 23:42:32.552371976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:42:32.562872 1483996 docker.go:295] overlay module found
	I1107 23:42:32.565311 1483996 out.go:177] * Using the docker driver based on user configuration
	I1107 23:42:32.567291 1483996 start.go:298] selected driver: docker
	I1107 23:42:32.567307 1483996 start.go:902] validating driver "docker" against <nil>
	I1107 23:42:32.567334 1483996 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:42:32.568026 1483996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:42:32.644229 1483996 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-07 23:42:32.630627754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:42:32.644378 1483996 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:42:32.644602 1483996 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:42:32.650788 1483996 out.go:177] * Using Docker driver with root privileges
	I1107 23:42:32.652400 1483996 cni.go:84] Creating CNI manager for ""
	I1107 23:42:32.652417 1483996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:42:32.652428 1483996 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:42:32.652445 1483996 start_flags.go:323] config:
	{Name:ingress-addon-legacy-878254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-878254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:42:32.654457 1483996 out.go:177] * Starting control plane node ingress-addon-legacy-878254 in cluster ingress-addon-legacy-878254
	I1107 23:42:32.656111 1483996 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:42:32.657804 1483996 out.go:177] * Pulling base image ...
	I1107 23:42:32.659920 1483996 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:42:32.659999 1483996 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:42:32.681244 1483996 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:42:32.681283 1483996 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 23:42:32.762141 1483996 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1107 23:42:32.762165 1483996 cache.go:56] Caching tarball of preloaded images
	I1107 23:42:32.762329 1483996 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:42:32.764309 1483996 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1107 23:42:32.765959 1483996 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1107 23:42:32.920994 1483996 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1107 23:42:42.412354 1483996 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1107 23:42:42.412467 1483996 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1107 23:42:43.601777 1483996 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
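The preload URL above carries an md5 checksum, and preload.go saves and verifies it after the download. A minimal sketch of that kind of verification, assuming the cached path and checksum shown in the log (this is not minikube's own implementation):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Path and expected checksum are the ones shown in the download URL above.
	const path = "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4"
	const want = "8ddd7f37d9a9977fe856222993d36c3d"

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("checksum match:", got == want)
}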
	I1107 23:42:43.602240 1483996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/config.json ...
	I1107 23:42:43.602275 1483996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/config.json: {Name:mke34835aa3628c27457c8de5dbd5e3f6b1c15e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:42:43.602453 1483996 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:42:43.602525 1483996 start.go:365] acquiring machines lock for ingress-addon-legacy-878254: {Name:mk62565e54b36132ba03a532498c28a066525ebe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:42:43.602591 1483996 start.go:369] acquired machines lock for "ingress-addon-legacy-878254" in 50.584µs
	I1107 23:42:43.602616 1483996 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-878254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-878254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:42:43.602683 1483996 start.go:125] createHost starting for "" (driver="docker")
	I1107 23:42:43.604973 1483996 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1107 23:42:43.605220 1483996 start.go:159] libmachine.API.Create for "ingress-addon-legacy-878254" (driver="docker")
	I1107 23:42:43.605246 1483996 client.go:168] LocalClient.Create starting
	I1107 23:42:43.605319 1483996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem
	I1107 23:42:43.605354 1483996 main.go:141] libmachine: Decoding PEM data...
	I1107 23:42:43.605370 1483996 main.go:141] libmachine: Parsing certificate...
	I1107 23:42:43.605427 1483996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem
	I1107 23:42:43.605451 1483996 main.go:141] libmachine: Decoding PEM data...
	I1107 23:42:43.605465 1483996 main.go:141] libmachine: Parsing certificate...
	I1107 23:42:43.605817 1483996 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-878254 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 23:42:43.623451 1483996 cli_runner.go:211] docker network inspect ingress-addon-legacy-878254 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 23:42:43.623540 1483996 network_create.go:281] running [docker network inspect ingress-addon-legacy-878254] to gather additional debugging logs...
	I1107 23:42:43.623561 1483996 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-878254
	W1107 23:42:43.640508 1483996 cli_runner.go:211] docker network inspect ingress-addon-legacy-878254 returned with exit code 1
	I1107 23:42:43.640545 1483996 network_create.go:284] error running [docker network inspect ingress-addon-legacy-878254]: docker network inspect ingress-addon-legacy-878254: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-878254 not found
	I1107 23:42:43.640569 1483996 network_create.go:286] output of [docker network inspect ingress-addon-legacy-878254]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-878254 not found
	
	** /stderr **
	I1107 23:42:43.640677 1483996 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:42:43.658418 1483996 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40005732c0}
	I1107 23:42:43.658458 1483996 network_create.go:124] attempt to create docker network ingress-addon-legacy-878254 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 23:42:43.658515 1483996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-878254 ingress-addon-legacy-878254
	I1107 23:42:43.730589 1483996 network_create.go:108] docker network ingress-addon-legacy-878254 192.168.49.0/24 created
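network_create.go settles on the free 192.168.49.0/24 subnet and creates the labelled bridge network with the `docker network create` call above. A small sketch, using a simplified variant of the inspect template from the log, of reading the resulting subnet and gateway back:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Simplified variant of the Go template minikube passes to
	// `docker network inspect` in the log above.
	const tmpl = "{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}"
	out, err := exec.Command("docker", "network", "inspect",
		"ingress-addon-legacy-878254", "--format", tmpl).Output()
	if err != nil {
		log.Fatal(err) // exit status 1 here means the network does not exist
	}
	fmt.Println(strings.TrimSpace(string(out))) // expected: 192.168.49.0/24 192.168.49.1
}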
	I1107 23:42:43.730623 1483996 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-878254" container
	I1107 23:42:43.730698 1483996 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:42:43.747099 1483996 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-878254 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-878254 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:42:43.765614 1483996 oci.go:103] Successfully created a docker volume ingress-addon-legacy-878254
	I1107 23:42:43.765702 1483996 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-878254-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-878254 --entrypoint /usr/bin/test -v ingress-addon-legacy-878254:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:42:45.402443 1483996 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-878254-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-878254 --entrypoint /usr/bin/test -v ingress-addon-legacy-878254:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.636696741s)
	I1107 23:42:45.402474 1483996 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-878254
	I1107 23:42:45.402503 1483996 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:42:45.402528 1483996 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:42:45.402616 1483996 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-878254:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:42:50.554606 1483996 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-878254:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.151944133s)
	I1107 23:42:50.554640 1483996 kic.go:203] duration metric: took 5.152109 seconds to extract preloaded images to volume
	W1107 23:42:50.554793 1483996 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:42:50.554908 1483996 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:42:50.629089 1483996 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-878254 --name ingress-addon-legacy-878254 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-878254 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-878254 --network ingress-addon-legacy-878254 --ip 192.168.49.2 --volume ingress-addon-legacy-878254:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:42:51.029507 1483996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-878254 --format={{.State.Running}}
	I1107 23:42:51.062548 1483996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-878254 --format={{.State.Status}}
	I1107 23:42:51.090865 1483996 cli_runner.go:164] Run: docker exec ingress-addon-legacy-878254 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:42:51.178868 1483996 oci.go:144] the created container "ingress-addon-legacy-878254" has a running status.
	I1107 23:42:51.178896 1483996 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa...
	I1107 23:42:51.761671 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 23:42:51.761773 1483996 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:42:51.790663 1483996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-878254 --format={{.State.Status}}
	I1107 23:42:51.818252 1483996 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:42:51.818272 1483996 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-878254 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:42:51.904057 1483996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-878254 --format={{.State.Status}}
	I1107 23:42:51.947710 1483996 machine.go:88] provisioning docker machine ...
	I1107 23:42:51.947744 1483996 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-878254"
	I1107 23:42:51.947818 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:42:51.974969 1483996 main.go:141] libmachine: Using SSH client type: native
	I1107 23:42:51.975404 1483996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34083 <nil> <nil>}
	I1107 23:42:51.975417 1483996 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-878254 && echo "ingress-addon-legacy-878254" | sudo tee /etc/hostname
	I1107 23:42:52.206045 1483996 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-878254
	
	I1107 23:42:52.206165 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:42:52.228837 1483996 main.go:141] libmachine: Using SSH client type: native
	I1107 23:42:52.229348 1483996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34083 <nil> <nil>}
	I1107 23:42:52.229371 1483996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-878254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-878254/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-878254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:42:52.371723 1483996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:42:52.371752 1483996 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-1449649/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-1449649/.minikube}
	I1107 23:42:52.371807 1483996 ubuntu.go:177] setting up certificates
	I1107 23:42:52.371816 1483996 provision.go:83] configureAuth start
	I1107 23:42:52.371888 1483996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-878254
	I1107 23:42:52.394328 1483996 provision.go:138] copyHostCerts
	I1107 23:42:52.394380 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1107 23:42:52.394413 1483996 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem, removing ...
	I1107 23:42:52.394428 1483996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1107 23:42:52.394504 1483996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem (1082 bytes)
	I1107 23:42:52.394601 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1107 23:42:52.394620 1483996 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem, removing ...
	I1107 23:42:52.394629 1483996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1107 23:42:52.394657 1483996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem (1123 bytes)
	I1107 23:42:52.394703 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1107 23:42:52.394726 1483996 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem, removing ...
	I1107 23:42:52.394738 1483996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1107 23:42:52.394770 1483996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem (1675 bytes)
	I1107 23:42:52.394819 1483996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-878254 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-878254]
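provision.go generates a server certificate whose SANs are the node IP, loopback, and the host names listed above. A hedged sketch of producing a certificate with those SANs in Go; the key size, validity window, and the self-signed shortcut are illustrative choices, whereas the real server.pem is signed by the profile's CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs and organization are taken from the provision.go:112 line above;
	// everything else (key size, validity, self-signing) is illustrative.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-878254"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-878254"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}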
	I1107 23:42:52.717431 1483996 provision.go:172] copyRemoteCerts
	I1107 23:42:52.717501 1483996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:42:52.717543 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:42:52.736272 1483996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34083 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa Username:docker}
	I1107 23:42:52.829073 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:42:52.829137 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:42:52.857371 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:42:52.857479 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 23:42:52.886750 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:42:52.886813 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1107 23:42:52.918046 1483996 provision.go:86] duration metric: configureAuth took 546.215395ms
	I1107 23:42:52.918071 1483996 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:42:52.918268 1483996 config.go:182] Loaded profile config "ingress-addon-legacy-878254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1107 23:42:52.918374 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:42:52.938181 1483996 main.go:141] libmachine: Using SSH client type: native
	I1107 23:42:52.938602 1483996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34083 <nil> <nil>}
	I1107 23:42:52.938623 1483996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:42:53.221621 1483996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:42:53.221701 1483996 machine.go:91] provisioned docker machine in 1.273963355s
	I1107 23:42:53.221726 1483996 client.go:171] LocalClient.Create took 9.616473531s
	I1107 23:42:53.221771 1483996 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-878254" took 9.616551684s
	I1107 23:42:53.221796 1483996 start.go:300] post-start starting for "ingress-addon-legacy-878254" (driver="docker")
	I1107 23:42:53.221825 1483996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:42:53.221944 1483996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:42:53.222053 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:42:53.242173 1483996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34083 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa Username:docker}
	I1107 23:42:53.337592 1483996 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:42:53.342214 1483996 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:42:53.342255 1483996 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:42:53.342268 1483996 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:42:53.342275 1483996 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:42:53.342291 1483996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/addons for local assets ...
	I1107 23:42:53.342364 1483996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/files for local assets ...
	I1107 23:42:53.342451 1483996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> 14550192.pem in /etc/ssl/certs
	I1107 23:42:53.342464 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> /etc/ssl/certs/14550192.pem
	I1107 23:42:53.342588 1483996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:42:53.353745 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /etc/ssl/certs/14550192.pem (1708 bytes)
	I1107 23:42:53.384485 1483996 start.go:303] post-start completed in 162.656539ms
	I1107 23:42:53.384861 1483996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-878254
	I1107 23:42:53.402453 1483996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/config.json ...
	I1107 23:42:53.402733 1483996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:42:53.402788 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:42:53.420093 1483996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34083 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa Username:docker}
	I1107 23:42:53.508424 1483996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:42:53.514420 1483996 start.go:128] duration metric: createHost completed in 9.911720596s
	I1107 23:42:53.514449 1483996 start.go:83] releasing machines lock for "ingress-addon-legacy-878254", held for 9.911845486s
	I1107 23:42:53.514523 1483996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-878254
	I1107 23:42:53.532151 1483996 ssh_runner.go:195] Run: cat /version.json
	I1107 23:42:53.532210 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:42:53.532477 1483996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:42:53.532535 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:42:53.551824 1483996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34083 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa Username:docker}
	I1107 23:42:53.571547 1483996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34083 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa Username:docker}
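Alongside reading /version.json, minikube probes registry reachability with `curl -sS -m 2 https://registry.k8s.io/` inside the guest. An equivalent two-second probe as a sketch; running it on the host rather than over SSH is the simplification here:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same 2-second budget as the curl -m 2 probe in the log above.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry.k8s.io status:", resp.Status)
}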
	I1107 23:42:53.642964 1483996 ssh_runner.go:195] Run: systemctl --version
	I1107 23:42:53.840062 1483996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:42:53.985569 1483996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:42:53.991245 1483996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:42:54.022545 1483996 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:42:54.022644 1483996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:42:54.064677 1483996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1107 23:42:54.064709 1483996 start.go:472] detecting cgroup driver to use...
	I1107 23:42:54.064785 1483996 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:42:54.064893 1483996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:42:54.084758 1483996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:42:54.099505 1483996 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:42:54.099576 1483996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:42:54.116876 1483996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:42:54.134386 1483996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:42:54.238143 1483996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:42:54.347661 1483996 docker.go:219] disabling docker service ...
	I1107 23:42:54.347789 1483996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:42:54.368822 1483996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:42:54.382550 1483996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:42:54.483883 1483996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:42:54.585633 1483996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:42:54.600247 1483996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:42:54.620795 1483996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1107 23:42:54.620900 1483996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:42:54.633006 1483996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:42:54.633080 1483996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:42:54.645889 1483996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:42:54.658495 1483996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:42:54.671112 1483996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:42:54.682549 1483996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:42:54.693243 1483996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:42:54.703607 1483996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:42:54.810405 1483996 ssh_runner.go:195] Run: sudo systemctl restart crio
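The sed invocations above point cri-o at the registry.k8s.io/pause:3.2 pause image and the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf before the restart. A hedged Go sketch of the same line-level rewrite; the regular expressions mirror the sed patterns but are not minikube's code, and writing the file requires root:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Mirrors the sed edits from the log (needs root to write the file).
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}
}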
	I1107 23:42:54.937942 1483996 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:42:54.938061 1483996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:42:54.943120 1483996 start.go:540] Will wait 60s for crictl version
	I1107 23:42:54.943183 1483996 ssh_runner.go:195] Run: which crictl
	I1107 23:42:54.947682 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:42:54.992109 1483996 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1107 23:42:54.992225 1483996 ssh_runner.go:195] Run: crio --version
	I1107 23:42:55.053229 1483996 ssh_runner.go:195] Run: crio --version
	I1107 23:42:55.110340 1483996 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1107 23:42:55.112134 1483996 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-878254 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:42:55.131140 1483996 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1107 23:42:55.135849 1483996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:42:55.150218 1483996 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:42:55.150290 1483996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:42:55.205161 1483996 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1107 23:42:55.205240 1483996 ssh_runner.go:195] Run: which lz4
	I1107 23:42:55.209872 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1107 23:42:55.209994 1483996 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1107 23:42:55.214644 1483996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:42:55.214681 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1107 23:42:57.448022 1483996 crio.go:444] Took 2.238086 seconds to copy over tarball
	I1107 23:42:57.448107 1483996 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:43:00.267564 1483996 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.819425615s)
	I1107 23:43:00.267594 1483996 crio.go:451] Took 2.819545 seconds to extract the tarball
	I1107 23:43:00.267605 1483996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1107 23:43:00.357766 1483996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:43:00.402439 1483996 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
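Because `sudo crictl images --output json` does not list registry.k8s.io/kube-apiserver:v1.18.20, the images are treated as not preloaded and LoadImages starts below. A sketch of that presence check; the JSON field names (`images`, `repoTags`) are assumptions about crictl's output shape, used only for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Same listing command the log runs; field names above are assumed.
	raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(raw, &imgs); err != nil {
		log.Fatal(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.18.20"
	found := false
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				found = true
			}
		}
	}
	fmt.Println("preloaded:", found)
}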
	I1107 23:43:00.402463 1483996 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 23:43:00.402547 1483996 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:43:00.402606 1483996 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:43:00.402794 1483996 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1107 23:43:00.402841 1483996 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:43:00.402920 1483996 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:43:00.403004 1483996 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:43:00.402798 1483996 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:43:00.403007 1483996 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1107 23:43:00.404381 1483996 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1107 23:43:00.404902 1483996 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:43:00.405062 1483996 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1107 23:43:00.405127 1483996 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:43:00.405182 1483996 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:43:00.405227 1483996 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:43:00.405279 1483996 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:43:00.405423 1483996 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W1107 23:43:00.960309 1483996 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1107 23:43:00.960531 1483996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1107 23:43:00.976642 1483996 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1107 23:43:00.976891 1483996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1107 23:43:00.982679 1483996 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1107 23:43:00.982946 1483996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:43:01.004315 1483996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1107 23:43:01.004519 1483996 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1107 23:43:01.004660 1483996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1107 23:43:01.014209 1483996 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1107 23:43:01.014394 1483996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1107 23:43:01.026550 1483996 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1107 23:43:01.026772 1483996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:43:01.068297 1483996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1107 23:43:01.068363 1483996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:43:01.068423 1483996 ssh_runner.go:195] Run: which crictl
	I1107 23:43:01.142949 1483996 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1107 23:43:01.142998 1483996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:43:01.143053 1483996 ssh_runner.go:195] Run: which crictl
	W1107 23:43:01.154578 1483996 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1107 23:43:01.154734 1483996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:43:01.207326 1483996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1107 23:43:01.207377 1483996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:43:01.207422 1483996 ssh_runner.go:195] Run: which crictl
	I1107 23:43:01.207507 1483996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1107 23:43:01.207526 1483996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:43:01.207547 1483996 ssh_runner.go:195] Run: which crictl
	I1107 23:43:01.207623 1483996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1107 23:43:01.207641 1483996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1107 23:43:01.207664 1483996 ssh_runner.go:195] Run: which crictl
	I1107 23:43:01.248850 1483996 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1107 23:43:01.248895 1483996 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1107 23:43:01.248968 1483996 ssh_runner.go:195] Run: which crictl
	I1107 23:43:01.255828 1483996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1107 23:43:01.255866 1483996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:43:01.255922 1483996 ssh_runner.go:195] Run: which crictl
	I1107 23:43:01.256014 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:43:01.256078 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1107 23:43:01.335083 1483996 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1107 23:43:01.335149 1483996 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:43:01.335209 1483996 ssh_runner.go:195] Run: which crictl
	I1107 23:43:01.335369 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1107 23:43:01.335447 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:43:01.335534 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:43:01.335619 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1107 23:43:01.335723 1483996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1107 23:43:01.335791 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:43:01.335882 1483996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1107 23:43:01.453487 1483996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1107 23:43:01.453556 1483996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1107 23:43:01.453595 1483996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1107 23:43:01.465782 1483996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1107 23:43:01.465894 1483996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:43:01.466039 1483996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1107 23:43:01.531188 1483996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1107 23:43:01.531303 1483996 cache_images.go:92] LoadImages completed in 1.128825072s
	W1107 23:43:01.531394 1483996 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
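The block above is the LoadImages pass: for each required image, minikube asks the runtime for its stored ID, compares it with the cached arm64 build, removes stale (wrong-arch) copies with crictl, and schedules a reload from the local cache under ~/.minikube/cache/images. A simplified sketch of the existence check, using the image list from the LoadImages line above (the digest comparison is omitted here):

    # Rough sketch of the per-image check; minikube additionally compares the
    # reported ID against the cached arm64 digest before deciding to transfer.
    for img in \
        registry.k8s.io/kube-apiserver:v1.18.20 \
        registry.k8s.io/kube-controller-manager:v1.18.20 \
        registry.k8s.io/kube-scheduler:v1.18.20 \
        registry.k8s.io/kube-proxy:v1.18.20 \
        registry.k8s.io/pause:3.2 \
        registry.k8s.io/etcd:3.4.3-0 \
        registry.k8s.io/coredns:1.6.7 \
        gcr.io/k8s-minikube/storage-provisioner:v5; do
      if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        echo "$img: missing, needs transfer"
      fi
    done
    # Removing a stale copy, as the log does per mismatched image:
    #   sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0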
	I1107 23:43:01.531476 1483996 ssh_runner.go:195] Run: crio config
	I1107 23:43:01.589863 1483996 cni.go:84] Creating CNI manager for ""
	I1107 23:43:01.589886 1483996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:43:01.589941 1483996 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:43:01.589970 1483996 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-878254 NodeName:ingress-addon-legacy-878254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1107 23:43:01.590167 1483996 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-878254"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:43:01.590248 1483996 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-878254 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-878254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
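For context: the kubeadm config printed above is not applied from memory; as later lines show, it is staged as kubeadm.yaml.new, promoted into place, and handed to kubeadm init. A minimal sketch of that hand-off, with paths and the binary version taken from this log (the real invocation below passes a longer --ignore-preflight-errors list, abridged here):

    # Sketch: promote the staged config and run kubeadm against it.
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification,Swap,NumCPU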
	I1107 23:43:01.590321 1483996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1107 23:43:01.601013 1483996 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:43:01.601099 1483996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:43:01.611480 1483996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1107 23:43:01.632372 1483996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1107 23:43:01.653359 1483996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1107 23:43:01.674639 1483996 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:43:01.679135 1483996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
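The one-liner above is minikube's idempotent /etc/hosts update: strip any existing line for the name, append the fresh mapping, and copy the temp file back with sudo. The same pattern, unpacked as a sketch:

    # Same pattern as the logged command, spelled out step by step.
    NAME=control-plane.minikube.internal
    IP=192.168.49.2
    # Drop any stale line for $NAME, then append the current mapping.
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    # Swap the file into place; needs root.
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$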
	I1107 23:43:01.692585 1483996 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254 for IP: 192.168.49.2
	I1107 23:43:01.692628 1483996 certs.go:190] acquiring lock for shared ca certs: {Name:mk4f8465cbc85ba57ebf3be6025d59928913c61b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:01.692799 1483996 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key
	I1107 23:43:01.692856 1483996 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key
	I1107 23:43:01.692915 1483996 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.key
	I1107 23:43:01.692932 1483996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt with IP's: []
	I1107 23:43:02.179468 1483996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt ...
	I1107 23:43:02.179506 1483996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: {Name:mk9effe3ed31b9552b903ef031e94a18662e59be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:02.179712 1483996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.key ...
	I1107 23:43:02.179727 1483996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.key: {Name:mk21d6d1cbefded56d644aef90131a864f9f756e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:02.179822 1483996 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.key.dd3b5fb2
	I1107 23:43:02.179841 1483996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:43:02.580243 1483996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.crt.dd3b5fb2 ...
	I1107 23:43:02.580275 1483996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.crt.dd3b5fb2: {Name:mk5e5888a64ebf81c028cdc8be89290405d8701a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:02.580469 1483996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.key.dd3b5fb2 ...
	I1107 23:43:02.580484 1483996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.key.dd3b5fb2: {Name:mk588a38f408432921847ffabf3ffdc7c4715362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:02.580570 1483996 certs.go:337] copying /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.crt
	I1107 23:43:02.580654 1483996 certs.go:341] copying /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.key
	I1107 23:43:02.580723 1483996 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.key
	I1107 23:43:02.580741 1483996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.crt with IP's: []
	I1107 23:43:03.347689 1483996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.crt ...
	I1107 23:43:03.347723 1483996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.crt: {Name:mk64e09cdd9bed756b76436ea9324e5ed0719b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:03.347946 1483996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.key ...
	I1107 23:43:03.347964 1483996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.key: {Name:mk5c3f3122410b5bbf3a10dec8387574c19106d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:03.348057 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 23:43:03.348080 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 23:43:03.348093 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 23:43:03.348108 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 23:43:03.348122 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:43:03.348137 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:43:03.348164 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:43:03.348180 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:43:03.348238 1483996 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem (1338 bytes)
	W1107 23:43:03.348282 1483996 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019_empty.pem, impossibly tiny 0 bytes
	I1107 23:43:03.348292 1483996 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:43:03.348320 1483996 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem (1082 bytes)
	I1107 23:43:03.348358 1483996 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:43:03.348386 1483996 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem (1675 bytes)
	I1107 23:43:03.348436 1483996 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem (1708 bytes)
	I1107 23:43:03.348472 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem -> /usr/share/ca-certificates/1455019.pem
	I1107 23:43:03.348484 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> /usr/share/ca-certificates/14550192.pem
	I1107 23:43:03.348496 1483996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:43:03.349104 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:43:03.379133 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:43:03.408339 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:43:03.438260 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:43:03.468260 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:43:03.497599 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:43:03.526342 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:43:03.555503 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:43:03.585231 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem --> /usr/share/ca-certificates/1455019.pem (1338 bytes)
	I1107 23:43:03.614642 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /usr/share/ca-certificates/14550192.pem (1708 bytes)
	I1107 23:43:03.644688 1483996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:43:03.674946 1483996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:43:03.696461 1483996 ssh_runner.go:195] Run: openssl version
	I1107 23:43:03.703761 1483996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455019.pem && ln -fs /usr/share/ca-certificates/1455019.pem /etc/ssl/certs/1455019.pem"
	I1107 23:43:03.715427 1483996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455019.pem
	I1107 23:43:03.720436 1483996 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:38 /usr/share/ca-certificates/1455019.pem
	I1107 23:43:03.720507 1483996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455019.pem
	I1107 23:43:03.729151 1483996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1455019.pem /etc/ssl/certs/51391683.0"
	I1107 23:43:03.740989 1483996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14550192.pem && ln -fs /usr/share/ca-certificates/14550192.pem /etc/ssl/certs/14550192.pem"
	I1107 23:43:03.753098 1483996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14550192.pem
	I1107 23:43:03.757965 1483996 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:38 /usr/share/ca-certificates/14550192.pem
	I1107 23:43:03.758098 1483996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14550192.pem
	I1107 23:43:03.766901 1483996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14550192.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:43:03.778596 1483996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:43:03.790439 1483996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:43:03.795209 1483996 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:30 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:43:03.795303 1483996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:43:03.804294 1483996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
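The openssl/ln sequence above installs each CA into the node's trust store: OpenSSL resolves CAs through symlinks named after the certificate's subject hash (for example b5213941.0 for the minikube CA in this run), so minikube computes the hash and creates the link. A short sketch of that step:

    # Sketch of the trust-store wiring shown above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941 per this log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"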
	I1107 23:43:03.815919 1483996 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:43:03.820290 1483996 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:43:03.820343 1483996 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-878254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-878254 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:43:03.820413 1483996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:43:03.820477 1483996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:43:03.861846 1483996 cri.go:89] found id: ""
	I1107 23:43:03.861922 1483996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:43:03.872513 1483996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:43:03.883081 1483996 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 23:43:03.883150 1483996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:43:03.893691 1483996 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:43:03.893738 1483996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 23:43:03.952594 1483996 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1107 23:43:03.952697 1483996 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:43:04.010915 1483996 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:43:04.010987 1483996 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1107 23:43:04.011028 1483996 kubeadm.go:322] OS: Linux
	I1107 23:43:04.011079 1483996 kubeadm.go:322] CGROUPS_CPU: enabled
	I1107 23:43:04.011130 1483996 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1107 23:43:04.011181 1483996 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1107 23:43:04.011230 1483996 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1107 23:43:04.011279 1483996 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1107 23:43:04.011330 1483996 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1107 23:43:04.103823 1483996 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:43:04.103970 1483996 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:43:04.104104 1483996 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:43:04.354510 1483996 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:43:04.355826 1483996 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:43:04.356082 1483996 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:43:04.486424 1483996 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:43:04.490733 1483996 out.go:204]   - Generating certificates and keys ...
	I1107 23:43:04.490853 1483996 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:43:04.490955 1483996 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:43:05.965634 1483996 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:43:06.156578 1483996 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:43:06.733400 1483996 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:43:07.419034 1483996 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:43:07.876483 1483996 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:43:07.876871 1483996 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-878254 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:43:08.286200 1483996 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:43:08.286580 1483996 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-878254 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:43:09.455091 1483996 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:43:09.876162 1483996 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:43:10.533532 1483996 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:43:10.534233 1483996 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:43:11.695805 1483996 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:43:12.786600 1483996 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:43:13.060401 1483996 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:43:13.550342 1483996 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:43:13.551241 1483996 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:43:13.553544 1483996 out.go:204]   - Booting up control plane ...
	I1107 23:43:13.553654 1483996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:43:13.562377 1483996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:43:13.562460 1483996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:43:13.566388 1483996 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:43:13.566795 1483996 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:43:26.570697 1483996 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002569 seconds
	I1107 23:43:26.570835 1483996 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:43:26.585019 1483996 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:43:27.104413 1483996 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:43:27.104557 1483996 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-878254 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1107 23:43:27.615000 1483996 kubeadm.go:322] [bootstrap-token] Using token: 3s3yo8.zk1lehxqmhiuxjsu
	I1107 23:43:27.616954 1483996 out.go:204]   - Configuring RBAC rules ...
	I1107 23:43:27.617105 1483996 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:43:27.625024 1483996 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:43:27.645745 1483996 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:43:27.654894 1483996 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:43:27.658783 1483996 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:43:27.662531 1483996 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:43:27.672479 1483996 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:43:28.013335 1483996 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:43:28.116284 1483996 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:43:28.117661 1483996 kubeadm.go:322] 
	I1107 23:43:28.117728 1483996 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:43:28.117734 1483996 kubeadm.go:322] 
	I1107 23:43:28.117805 1483996 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:43:28.117810 1483996 kubeadm.go:322] 
	I1107 23:43:28.117834 1483996 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:43:28.117891 1483996 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:43:28.117939 1483996 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:43:28.117943 1483996 kubeadm.go:322] 
	I1107 23:43:28.118016 1483996 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:43:28.118088 1483996 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:43:28.118152 1483996 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:43:28.118156 1483996 kubeadm.go:322] 
	I1107 23:43:28.118235 1483996 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:43:28.118306 1483996 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:43:28.118311 1483996 kubeadm.go:322] 
	I1107 23:43:28.118389 1483996 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3s3yo8.zk1lehxqmhiuxjsu \
	I1107 23:43:28.118488 1483996 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 \
	I1107 23:43:28.118510 1483996 kubeadm.go:322]     --control-plane 
	I1107 23:43:28.118514 1483996 kubeadm.go:322] 
	I1107 23:43:28.118593 1483996 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:43:28.118598 1483996 kubeadm.go:322] 
	I1107 23:43:28.119189 1483996 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3s3yo8.zk1lehxqmhiuxjsu \
	I1107 23:43:28.119361 1483996 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 
	I1107 23:43:28.122346 1483996 kubeadm.go:322] W1107 23:43:03.951916    1222 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1107 23:43:28.122550 1483996 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1107 23:43:28.122646 1483996 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:43:28.122764 1483996 kubeadm.go:322] W1107 23:43:13.559677    1222 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 23:43:28.122878 1483996 kubeadm.go:322] W1107 23:43:13.561633    1222 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 23:43:28.122891 1483996 cni.go:84] Creating CNI manager for ""
	I1107 23:43:28.122898 1483996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:43:28.124649 1483996 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:43:28.126194 1483996 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:43:28.131166 1483996 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1107 23:43:28.131184 1483996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:43:28.156287 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:43:28.605286 1483996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:43:28.605437 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:28.605569 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=ingress-addon-legacy-878254 minikube.k8s.io/updated_at=2023_11_07T23_43_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:28.721586 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:28.721648 1483996 ops.go:34] apiserver oom_adj: -16
	I1107 23:43:28.853785 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:29.454146 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:29.954117 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:30.454477 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:30.953592 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:31.454388 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:31.953567 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:32.453565 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:32.953918 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:33.453784 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:33.953568 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:34.453550 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:34.954234 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:35.454143 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:35.953687 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:36.454281 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:36.954245 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:37.453965 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:37.953951 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:38.454567 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:38.953567 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:39.454042 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:39.953589 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:40.454329 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:40.954180 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:41.453761 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:41.954259 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:42.453675 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:42.954330 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:43.453795 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:43.954184 1483996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:43:44.068349 1483996 kubeadm.go:1081] duration metric: took 15.462964627s to wait for elevateKubeSystemPrivileges.
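The burst of repeated "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: minikube polls until the default ServiceAccount exists (evidence that the controller-manager is serving), alongside binding kube-system:default to cluster-admin so addons can run. An equivalent wait, as a sketch (the log actually issues the clusterrolebinding up front at 23:43:28.605; the ordering here is simplified):

    # Sketch: poll for the default ServiceAccount, then apply the RBAC binding.
    KUBECTL=/var/lib/minikube/binaries/v1.18.20/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
      sleep 0.5
    done
    sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig="$KCFG"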
	I1107 23:43:44.068380 1483996 kubeadm.go:406] StartCluster complete in 40.248042084s
	I1107 23:43:44.068398 1483996 settings.go:142] acquiring lock: {Name:mk87503ca622eddfd1b600486068357de065638c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:44.068460 1483996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:43:44.069164 1483996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/kubeconfig: {Name:mk5ec442d2fb6aea8291322e188521db23ee465e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:43:44.070077 1483996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:43:44.069910 1483996 kapi.go:59] client config for ingress-addon-legacy-878254: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.key", CAFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:43:44.070343 1483996 config.go:182] Loaded profile config "ingress-addon-legacy-878254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1107 23:43:44.070461 1483996 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:43:44.070537 1483996 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-878254"
	I1107 23:43:44.070552 1483996 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-878254"
	I1107 23:43:44.070585 1483996 host.go:66] Checking if "ingress-addon-legacy-878254" exists ...
	I1107 23:43:44.071058 1483996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-878254 --format={{.State.Status}}
	I1107 23:43:44.071367 1483996 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 23:43:44.071748 1483996 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-878254"
	I1107 23:43:44.071770 1483996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-878254"
	I1107 23:43:44.072048 1483996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-878254 --format={{.State.Status}}
	I1107 23:43:44.130436 1483996 kapi.go:59] client config for ingress-addon-legacy-878254: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.key", CAFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:43:44.130702 1483996 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-878254"
	I1107 23:43:44.130737 1483996 host.go:66] Checking if "ingress-addon-legacy-878254" exists ...
	I1107 23:43:44.131193 1483996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-878254 --format={{.State.Status}}
	I1107 23:43:44.134948 1483996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:43:44.138731 1483996 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:43:44.138756 1483996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:43:44.138822 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:43:44.172575 1483996 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:43:44.172598 1483996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:43:44.172666 1483996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-878254
	I1107 23:43:44.201108 1483996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34083 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa Username:docker}
	I1107 23:43:44.227557 1483996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34083 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/ingress-addon-legacy-878254/id_rsa Username:docker}
	I1107 23:43:44.262695 1483996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:43:44.301869 1483996 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-878254" context rescaled to 1 replicas
	I1107 23:43:44.301951 1483996 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:43:44.308299 1483996 out.go:177] * Verifying Kubernetes components...
	I1107 23:43:44.310543 1483996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:43:44.399030 1483996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:43:44.471148 1483996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:43:44.848748 1483996 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1107 23:43:44.849492 1483996 kapi.go:59] client config for ingress-addon-legacy-878254: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.key", CAFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:43:44.849849 1483996 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-878254" to be "Ready" ...
	I1107 23:43:45.118483 1483996 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 23:43:45.125039 1483996 addons.go:502] enable addons completed in 1.05456303s: enabled=[storage-provisioner default-storageclass]
	I1107 23:43:46.887274 1483996 node_ready.go:58] node "ingress-addon-legacy-878254" has status "Ready":"False"
	I1107 23:43:49.387634 1483996 node_ready.go:58] node "ingress-addon-legacy-878254" has status "Ready":"False"
	I1107 23:43:51.887603 1483996 node_ready.go:58] node "ingress-addon-legacy-878254" has status "Ready":"False"
	I1107 23:43:54.386788 1483996 node_ready.go:58] node "ingress-addon-legacy-878254" has status "Ready":"False"
	I1107 23:43:56.886530 1483996 node_ready.go:58] node "ingress-addon-legacy-878254" has status "Ready":"False"
	I1107 23:43:58.887347 1483996 node_ready.go:58] node "ingress-addon-legacy-878254" has status "Ready":"False"
	I1107 23:44:00.887428 1483996 node_ready.go:58] node "ingress-addon-legacy-878254" has status "Ready":"False"
	I1107 23:44:01.886755 1483996 node_ready.go:49] node "ingress-addon-legacy-878254" has status "Ready":"True"
	I1107 23:44:01.886784 1483996 node_ready.go:38] duration metric: took 17.036912751s waiting for node "ingress-addon-legacy-878254" to be "Ready" ...
	I1107 23:44:01.886797 1483996 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:44:01.893764 1483996 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xgz88" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:03.901917 1483996 pod_ready.go:102] pod "coredns-66bff467f8-xgz88" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-07 23:43:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1107 23:44:06.403874 1483996 pod_ready.go:102] pod "coredns-66bff467f8-xgz88" in "kube-system" namespace has status "Ready":"False"
	I1107 23:44:08.404785 1483996 pod_ready.go:102] pod "coredns-66bff467f8-xgz88" in "kube-system" namespace has status "Ready":"False"
	I1107 23:44:10.904227 1483996 pod_ready.go:92] pod "coredns-66bff467f8-xgz88" in "kube-system" namespace has status "Ready":"True"
	I1107 23:44:10.904254 1483996 pod_ready.go:81] duration metric: took 9.010461523s waiting for pod "coredns-66bff467f8-xgz88" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.904267 1483996 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-878254" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.909136 1483996 pod_ready.go:92] pod "etcd-ingress-addon-legacy-878254" in "kube-system" namespace has status "Ready":"True"
	I1107 23:44:10.909160 1483996 pod_ready.go:81] duration metric: took 4.886169ms waiting for pod "etcd-ingress-addon-legacy-878254" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.909174 1483996 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-878254" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.914185 1483996 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-878254" in "kube-system" namespace has status "Ready":"True"
	I1107 23:44:10.914209 1483996 pod_ready.go:81] duration metric: took 5.026976ms waiting for pod "kube-apiserver-ingress-addon-legacy-878254" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.914222 1483996 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-878254" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.921905 1483996 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-878254" in "kube-system" namespace has status "Ready":"True"
	I1107 23:44:10.921933 1483996 pod_ready.go:81] duration metric: took 7.703122ms waiting for pod "kube-controller-manager-ingress-addon-legacy-878254" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.921953 1483996 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2l45" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.928338 1483996 pod_ready.go:92] pod "kube-proxy-c2l45" in "kube-system" namespace has status "Ready":"True"
	I1107 23:44:10.928365 1483996 pod_ready.go:81] duration metric: took 6.399756ms waiting for pod "kube-proxy-c2l45" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:10.928378 1483996 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-878254" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:11.099655 1483996 request.go:629] Waited for 171.17896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-878254
	I1107 23:44:11.298975 1483996 request.go:629] Waited for 196.282718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-878254
	I1107 23:44:11.301890 1483996 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-878254" in "kube-system" namespace has status "Ready":"True"
	I1107 23:44:11.301915 1483996 pod_ready.go:81] duration metric: took 373.528162ms waiting for pod "kube-scheduler-ingress-addon-legacy-878254" in "kube-system" namespace to be "Ready" ...
	I1107 23:44:11.301928 1483996 pod_ready.go:38] duration metric: took 9.415120025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:44:11.301971 1483996 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:44:11.302066 1483996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:44:11.315562 1483996 api_server.go:72] duration metric: took 27.013540727s to wait for apiserver process to appear ...
	I1107 23:44:11.315588 1483996 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:44:11.315605 1483996 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1107 23:44:11.324354 1483996 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1107 23:44:11.326310 1483996 api_server.go:141] control plane version: v1.18.20
	I1107 23:44:11.326338 1483996 api_server.go:131] duration metric: took 10.742242ms to wait for apiserver health ...
	I1107 23:44:11.326348 1483996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:44:11.499737 1483996 request.go:629] Waited for 173.304013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:44:11.505926 1483996 system_pods.go:59] 8 kube-system pods found
	I1107 23:44:11.506032 1483996 system_pods.go:61] "coredns-66bff467f8-xgz88" [4b013499-282f-49df-95d7-00ef255bd5bc] Running
	I1107 23:44:11.506055 1483996 system_pods.go:61] "etcd-ingress-addon-legacy-878254" [62b430ff-95c4-4321-9731-be266a5bbf35] Running
	I1107 23:44:11.506075 1483996 system_pods.go:61] "kindnet-k4lcf" [e0dccbfc-a858-451c-bb52-069cf01ecf82] Running
	I1107 23:44:11.506099 1483996 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-878254" [3c2db4e9-c170-4a8d-99e0-46f1d4cb26d7] Running
	I1107 23:44:11.506123 1483996 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-878254" [10c8f21b-f733-4ece-979c-6643ad576f0e] Running
	I1107 23:44:11.506145 1483996 system_pods.go:61] "kube-proxy-c2l45" [c750611d-c754-4c20-b726-e97ea58a3341] Running
	I1107 23:44:11.506168 1483996 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-878254" [ea9c64f7-7b6f-4008-9646-04da6237fdc5] Running
	I1107 23:44:11.506201 1483996 system_pods.go:61] "storage-provisioner" [361d6a51-4436-4371-88a2-37f6ac9d47bb] Running
	I1107 23:44:11.506223 1483996 system_pods.go:74] duration metric: took 179.868501ms to wait for pod list to return data ...
	I1107 23:44:11.506243 1483996 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:44:11.699672 1483996 request.go:629] Waited for 193.34032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:44:11.702307 1483996 default_sa.go:45] found service account: "default"
	I1107 23:44:11.702334 1483996 default_sa.go:55] duration metric: took 196.069797ms for default service account to be created ...
	I1107 23:44:11.702347 1483996 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:44:11.899791 1483996 request.go:629] Waited for 197.380473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:44:11.906263 1483996 system_pods.go:86] 8 kube-system pods found
	I1107 23:44:11.906299 1483996 system_pods.go:89] "coredns-66bff467f8-xgz88" [4b013499-282f-49df-95d7-00ef255bd5bc] Running
	I1107 23:44:11.906307 1483996 system_pods.go:89] "etcd-ingress-addon-legacy-878254" [62b430ff-95c4-4321-9731-be266a5bbf35] Running
	I1107 23:44:11.906312 1483996 system_pods.go:89] "kindnet-k4lcf" [e0dccbfc-a858-451c-bb52-069cf01ecf82] Running
	I1107 23:44:11.906318 1483996 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-878254" [3c2db4e9-c170-4a8d-99e0-46f1d4cb26d7] Running
	I1107 23:44:11.906323 1483996 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-878254" [10c8f21b-f733-4ece-979c-6643ad576f0e] Running
	I1107 23:44:11.906328 1483996 system_pods.go:89] "kube-proxy-c2l45" [c750611d-c754-4c20-b726-e97ea58a3341] Running
	I1107 23:44:11.906333 1483996 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-878254" [ea9c64f7-7b6f-4008-9646-04da6237fdc5] Running
	I1107 23:44:11.906349 1483996 system_pods.go:89] "storage-provisioner" [361d6a51-4436-4371-88a2-37f6ac9d47bb] Running
	I1107 23:44:11.906365 1483996 system_pods.go:126] duration metric: took 204.006409ms to wait for k8s-apps to be running ...
	I1107 23:44:11.906377 1483996 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:44:11.906439 1483996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:44:11.920121 1483996 system_svc.go:56] duration metric: took 13.734018ms WaitForService to wait for kubelet.
	I1107 23:44:11.920159 1483996 kubeadm.go:581] duration metric: took 27.618137694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:44:11.920179 1483996 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:44:12.099647 1483996 request.go:629] Waited for 179.369782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1107 23:44:12.102837 1483996 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1107 23:44:12.102871 1483996 node_conditions.go:123] node cpu capacity is 2
	I1107 23:44:12.102882 1483996 node_conditions.go:105] duration metric: took 182.69749ms to run NodePressure ...
	I1107 23:44:12.102895 1483996 start.go:228] waiting for startup goroutines ...
	I1107 23:44:12.102901 1483996 start.go:233] waiting for cluster config update ...
	I1107 23:44:12.102912 1483996 start.go:242] writing updated cluster config ...
	I1107 23:44:12.103202 1483996 ssh_runner.go:195] Run: rm -f paused
	I1107 23:44:12.166214 1483996 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1107 23:44:12.168494 1483996 out.go:177] 
	W1107 23:44:12.170017 1483996 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1107 23:44:12.171766 1483996 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1107 23:44:12.173501 1483996 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-878254" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 07 23:47:15 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:15.536219420Z" level=info msg="Created container 825552756ab42da5d39ed8d09bb00cf97d741988f7e989e35a5dff7f29734189: default/hello-world-app-5f5d8b66bb-76wst/hello-world-app" id=8a40dc89-7669-4820-ba40-46cedde9c9ab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Nov 07 23:47:15 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:15.537325191Z" level=info msg="Starting container: 825552756ab42da5d39ed8d09bb00cf97d741988f7e989e35a5dff7f29734189" id=2b93ecbf-72f5-4081-a433-df49124c0bc9 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Nov 07 23:47:15 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:15.556800947Z" level=info msg="Started container" PID=3647 containerID=825552756ab42da5d39ed8d09bb00cf97d741988f7e989e35a5dff7f29734189 description=default/hello-world-app-5f5d8b66bb-76wst/hello-world-app id=2b93ecbf-72f5-4081-a433-df49124c0bc9 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=71babca27355323297e222777b00a85a67ed14d635184dcca58af62a500c117c
	Nov 07 23:47:15 ingress-addon-legacy-878254 conmon[3635]: conmon 825552756ab42da5d39e <ninfo>: container 3647 exited with status 1
	Nov 07 23:47:15 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:15.947306347Z" level=info msg="Removing container: 008c276d07bb0026b9a43a65dc5b99b84e36b969aa82b80ac0b32463e1919688" id=839b6930-33c5-41b0-b345-8934e939abad name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Nov 07 23:47:15 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:15.971565143Z" level=info msg="Removed container 008c276d07bb0026b9a43a65dc5b99b84e36b969aa82b80ac0b32463e1919688: default/hello-world-app-5f5d8b66bb-76wst/hello-world-app" id=839b6930-33c5-41b0-b345-8934e939abad name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Nov 07 23:47:16 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:16.441523690Z" level=info msg="Stopping container: 87c4ba77142bc09e3bcf4da9ff033e7e94e39f297a68a18fc19f8e4556af2ed9 (timeout: 2s)" id=34113f28-0b83-4bf5-b2ba-94dc15806831 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 07 23:47:16 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:16.452702975Z" level=info msg="Stopping container: 87c4ba77142bc09e3bcf4da9ff033e7e94e39f297a68a18fc19f8e4556af2ed9 (timeout: 2s)" id=9b296a95-e223-4f02-a243-ebcfa44f724e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 07 23:47:17 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:17.420330316Z" level=info msg="Stopping pod sandbox: 558a42a363d35a1e5a643e5370aa0dc33baabbeadaafd6dc7c33715fd58afa78" id=a33ec912-7e5a-43ca-bf4a-435a1089931c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:47:17 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:17.420373188Z" level=info msg="Stopped pod sandbox (already stopped): 558a42a363d35a1e5a643e5370aa0dc33baabbeadaafd6dc7c33715fd58afa78" id=a33ec912-7e5a-43ca-bf4a-435a1089931c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.461059819Z" level=warning msg="Stopping container 87c4ba77142bc09e3bcf4da9ff033e7e94e39f297a68a18fc19f8e4556af2ed9 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=34113f28-0b83-4bf5-b2ba-94dc15806831 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 07 23:47:18 ingress-addon-legacy-878254 conmon[2743]: conmon 87c4ba77142bc09e3bcf <ninfo>: container 2754 exited with status 137
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.645443320Z" level=info msg="Stopped container 87c4ba77142bc09e3bcf4da9ff033e7e94e39f297a68a18fc19f8e4556af2ed9: ingress-nginx/ingress-nginx-controller-7fcf777cb7-lzrmj/controller" id=9b296a95-e223-4f02-a243-ebcfa44f724e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.647898938Z" level=info msg="Stopped container 87c4ba77142bc09e3bcf4da9ff033e7e94e39f297a68a18fc19f8e4556af2ed9: ingress-nginx/ingress-nginx-controller-7fcf777cb7-lzrmj/controller" id=34113f28-0b83-4bf5-b2ba-94dc15806831 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.648118645Z" level=info msg="Stopping pod sandbox: b0e48195648e55de722ac5ab40411298e06c398baf9c8dad16c0d3dafef33b48" id=34e19f04-1263-45d0-ace9-8144c69e84ea name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.651455191Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-EPQXIULIAYZV72GC - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-RIDLXXL3D5PXLAJK - [0:0]\n-X KUBE-HP-RIDLXXL3D5PXLAJK\n-X KUBE-HP-EPQXIULIAYZV72GC\nCOMMIT\n"
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.656287896Z" level=info msg="Stopping pod sandbox: b0e48195648e55de722ac5ab40411298e06c398baf9c8dad16c0d3dafef33b48" id=b432f873-20f3-45b8-bf34-d978e607a537 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.657090006Z" level=info msg="Closing host port tcp:80"
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.657135946Z" level=info msg="Closing host port tcp:443"
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.658504189Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.658532144Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.658676380Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-lzrmj Namespace:ingress-nginx ID:b0e48195648e55de722ac5ab40411298e06c398baf9c8dad16c0d3dafef33b48 UID:760652ee-c008-46a0-919a-161be7afef51 NetNS:/var/run/netns/f19c2864-1253-43a6-80ab-e617fb8e98e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.658810459Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-lzrmj from CNI network \"kindnet\" (type=ptp)"
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.687843139Z" level=info msg="Stopped pod sandbox: b0e48195648e55de722ac5ab40411298e06c398baf9c8dad16c0d3dafef33b48" id=34e19f04-1263-45d0-ace9-8144c69e84ea name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:47:18 ingress-addon-legacy-878254 crio[891]: time="2023-11-07 23:47:18.687960307Z" level=info msg="Stopped pod sandbox (already stopped): b0e48195648e55de722ac5ab40411298e06c398baf9c8dad16c0d3dafef33b48" id=b432f873-20f3-45b8-bf34-d978e607a537 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	825552756ab42       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   8 seconds ago       Exited              hello-world-app           2                   71babca273553       hello-world-app-5f5d8b66bb-76wst
	009712ebbf736       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                    2 minutes ago       Running             nginx                     0                   3a42d653f4e59       nginx
	87c4ba77142bc       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   b0e48195648e5       ingress-nginx-controller-7fcf777cb7-lzrmj
	8d604904d7355       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   488b844097fed       ingress-nginx-admission-patch-xvsd5
	d2a30a50e5b02       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   c53c90a88e3d8       ingress-nginx-admission-create-mcnpz
	d0288f5aa5a0b       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   f30f2a482f350       storage-provisioner
	b8b322b2d7827       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   fdff6b3ca130a       coredns-66bff467f8-xgz88
	608b282c6b4b0       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   a0306d3c3ddfc       kindnet-k4lcf
	8850dba41affd       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   28e027ad87990       kube-proxy-c2l45
	cf53ee1b57076       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   0ab66fb9366ae       kube-controller-manager-ingress-addon-legacy-878254
	4bc37d377a618       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   18bb805e58f29       kube-apiserver-ingress-addon-legacy-878254
	9ab3ba6d87235       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   05b03d7087b01       kube-scheduler-ingress-addon-legacy-878254
	e449cebc2f644       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   074e26c2586c1       etcd-ingress-addon-legacy-878254
	
	* 
	* ==> coredns [b8b322b2d7827b9a07fa9f80d85c0dfb3fec0204f09f06b5d9e05773e8f642f8] <==
	* [INFO] 10.244.0.5:44334 - 11533 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003968s
	[INFO] 10.244.0.5:44334 - 27538 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003899s
	[INFO] 10.244.0.5:44334 - 10125 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038637s
	[INFO] 10.244.0.5:44334 - 20247 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051659s
	[INFO] 10.244.0.5:44334 - 51709 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002100307s
	[INFO] 10.244.0.5:44334 - 52066 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000971758s
	[INFO] 10.244.0.5:44334 - 34087 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054793s
	[INFO] 10.244.0.5:55537 - 43765 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000097337s
	[INFO] 10.244.0.5:58012 - 13412 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000031286s
	[INFO] 10.244.0.5:58012 - 42687 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058592s
	[INFO] 10.244.0.5:55537 - 34800 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054277s
	[INFO] 10.244.0.5:55537 - 1541 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005175s
	[INFO] 10.244.0.5:58012 - 43372 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040172s
	[INFO] 10.244.0.5:55537 - 55689 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037144s
	[INFO] 10.244.0.5:58012 - 45139 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046941s
	[INFO] 10.244.0.5:55537 - 55582 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043052s
	[INFO] 10.244.0.5:58012 - 20179 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042051s
	[INFO] 10.244.0.5:55537 - 43426 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063802s
	[INFO] 10.244.0.5:58012 - 40136 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037432s
	[INFO] 10.244.0.5:58012 - 30048 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00146654s
	[INFO] 10.244.0.5:55537 - 53836 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001380427s
	[INFO] 10.244.0.5:58012 - 64554 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001200236s
	[INFO] 10.244.0.5:55537 - 44851 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001976697s
	[INFO] 10.244.0.5:58012 - 29477 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000191498s
	[INFO] 10.244.0.5:55537 - 9988 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000657562s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-878254
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-878254
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=ingress-addon-legacy-878254
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_43_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:43:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-878254
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:47:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:45:01 +0000   Tue, 07 Nov 2023 23:43:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:45:01 +0000   Tue, 07 Nov 2023 23:43:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:45:01 +0000   Tue, 07 Nov 2023 23:43:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:45:01 +0000   Tue, 07 Nov 2023 23:44:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-878254
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 745b048ff44849a4af303cb758bf84fa
	  System UUID:                a71acd2a-a4f2-43d5-b5ed-1f6b5e5717d2
	  Boot ID:                    b7db73c9-0d39-49c2-bed0-71d8dac21d90
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-76wst                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-xgz88                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m40s
	  kube-system                 etcd-ingress-addon-legacy-878254                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kindnet-k4lcf                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m40s
	  kube-system                 kube-apiserver-ingress-addon-legacy-878254             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-878254    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-c2l45                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-scheduler-ingress-addon-legacy-878254             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m8s (x5 over 4m8s)  kubelet     Node ingress-addon-legacy-878254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x4 over 4m8s)  kubelet     Node ingress-addon-legacy-878254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x4 over 4m8s)  kubelet     Node ingress-addon-legacy-878254 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m53s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s                kubelet     Node ingress-addon-legacy-878254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s                kubelet     Node ingress-addon-legacy-878254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s                kubelet     Node ingress-addon-legacy-878254 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m39s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m23s                kubelet     Node ingress-addon-legacy-878254 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001043] FS-Cache: O-key=[8] '76d7c90000000000'
	[  +0.000738] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=00000000bc8cf6fc
	[  +0.001025] FS-Cache: N-key=[8] '76d7c90000000000'
	[  +0.003396] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=000000002bce5c9d
	[  +0.001174] FS-Cache: O-key=[8] '76d7c90000000000'
	[  +0.000722] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=00000000fd31c591
	[  +0.001038] FS-Cache: N-key=[8] '76d7c90000000000'
	[  +2.872966] FS-Cache: Duplicate cookie detected
	[  +0.000737] FS-Cache: O-cookie c=0000004d [p=0000004b fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=000000006496a620
	[  +0.001050] FS-Cache: O-key=[8] '75d7c90000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001036] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=0000000055832ffe
	[  +0.001096] FS-Cache: N-key=[8] '75d7c90000000000'
	[  +0.450474] FS-Cache: Duplicate cookie detected
	[  +0.000703] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=000000005bc02455
	[  +0.001040] FS-Cache: O-key=[8] '7bd7c90000000000'
	[  +0.000707] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=000000009aecb77a
	[  +0.001068] FS-Cache: N-key=[8] '7bd7c90000000000'
	
	* 
	* ==> etcd [e449cebc2f64449f74dd22fadcce4a069a34ccae0bb3fda5603ec7adc6894c44] <==
	* raft2023/11/07 23:43:19 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/07 23:43:19 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/07 23:43:19 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-07 23:43:19.832178 W | auth: simple token is not cryptographically signed
	2023-11-07 23:43:19.839518 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-07 23:43:19.843043 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-07 23:43:19.843210 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-07 23:43:19.843399 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-07 23:43:19.843543 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/07 23:43:19 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-07 23:43:19.843802 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/11/07 23:43:20 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/07 23:43:20 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/07 23:43:20 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/07 23:43:20 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/07 23:43:20 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-07 23:43:20.138187 I | etcdserver: published {Name:ingress-addon-legacy-878254 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-07 23:43:20.166034 I | embed: ready to serve client requests
	2023-11-07 23:43:20.192549 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-07 23:43:20.246058 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-07 23:43:20.282911 I | embed: ready to serve client requests
	2023-11-07 23:43:20.284281 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-07 23:43:20.470091 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-07 23:43:20.902099 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-07 23:43:20.910001 W | etcdserver: request "ID:8128024999073747204 Method:\"PUT\" Path:\"/0/version\" Val:\"3.4.0\" " with result "" took too long (432.091727ms) to execute
	
	* 
	* ==> kernel <==
	*  23:47:24 up  6:29,  0 users,  load average: 0.15, 0.96, 1.97
	Linux ingress-addon-legacy-878254 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [608b282c6b4b02fdecdc308b3ac32b63887877bcbfda4824accf874915aa4ef5] <==
	* I1107 23:45:17.983042       1 main.go:227] handling current node
	I1107 23:45:27.986185       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:45:27.986212       1 main.go:227] handling current node
	I1107 23:45:37.995138       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:45:37.995168       1 main.go:227] handling current node
	I1107 23:45:47.999687       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:45:47.999857       1 main.go:227] handling current node
	I1107 23:45:58.009363       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:45:58.009395       1 main.go:227] handling current node
	I1107 23:46:08.021668       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:46:08.021701       1 main.go:227] handling current node
	I1107 23:46:18.026247       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:46:18.026476       1 main.go:227] handling current node
	I1107 23:46:28.041699       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:46:28.041727       1 main.go:227] handling current node
	I1107 23:46:38.045308       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:46:38.045361       1 main.go:227] handling current node
	I1107 23:46:48.049653       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:46:48.049683       1 main.go:227] handling current node
	I1107 23:46:58.074842       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:46:58.074872       1 main.go:227] handling current node
	I1107 23:47:08.087088       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:47:08.087117       1 main.go:227] handling current node
	I1107 23:47:18.098297       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:47:18.098329       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4bc37d377a618ac537a44c5016bc79e2e1c0fccc8210f5d29f75342250c78467] <==
	* I1107 23:43:25.048478       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1107 23:43:25.254929       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1107 23:43:25.254967       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:43:25.288082       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 23:43:25.288212       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1107 23:43:25.293540       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 23:43:26.045827       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1107 23:43:26.045859       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 23:43:26.057751       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1107 23:43:26.062389       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1107 23:43:26.062513       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1107 23:43:26.473136       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:43:26.528466       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1107 23:43:26.643923       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1107 23:43:26.645029       1 controller.go:609] quota admission added evaluator for: endpoints
	I1107 23:43:26.648629       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 23:43:27.498498       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1107 23:43:27.984641       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1107 23:43:28.090529       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1107 23:43:31.362567       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:43:44.050860       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1107 23:43:44.260127       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1107 23:44:13.128009       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1107 23:44:38.511202       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1107 23:47:15.454254       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x400ac8f170), encoder:(*versioning.codec)(0x400ad6a000), buf:(*bytes.Buffer)(0x40060b2210)})
	
	* 
	* ==> kube-controller-manager [cf53ee1b5707655c9e2f05550d2dd44e5bb830f22b84bafd2bb0810bced61ee9] <==
	* I1107 23:43:44.358789       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c21893b6-fe10-402d-8df0-000a25030040", APIVersion:"apps/v1", ResourceVersion:"340", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-gz62h
	I1107 23:43:44.383705       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c21893b6-fe10-402d-8df0-000a25030040", APIVersion:"apps/v1", ResourceVersion:"340", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-xgz88
	I1107 23:43:44.448756       1 shared_informer.go:230] Caches are synced for disruption 
	I1107 23:43:44.448782       1 disruption.go:339] Sending events to api server.
	I1107 23:43:44.449072       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1107 23:43:44.483711       1 shared_informer.go:230] Caches are synced for resource quota 
	I1107 23:43:44.498315       1 shared_informer.go:230] Caches are synced for resource quota 
	I1107 23:43:44.498603       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1107 23:43:44.498710       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1107 23:43:44.498721       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 23:43:44.498774       1 shared_informer.go:230] Caches are synced for endpoint 
	I1107 23:43:44.615372       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"422f2a4f-29f2-4e62-9443-74548061c5fc", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1107 23:43:44.829170       1 request.go:621] Throttling request took 1.012781894s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I1107 23:43:44.962228       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c21893b6-fe10-402d-8df0-000a25030040", APIVersion:"apps/v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-gz62h
	I1107 23:43:45.280452       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1107 23:43:45.280502       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1107 23:44:04.050753       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1107 23:44:13.104808       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"0733ac6c-9066-4b21-8622-81a0e7f6c995", APIVersion:"apps/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1107 23:44:13.140328       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"ce0772ab-265c-455f-8433-513c2775ae99", APIVersion:"apps/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-lzrmj
	I1107 23:44:13.193403       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1726bf48-4083-4ad3-8882-826ba96f3bae", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-mcnpz
	I1107 23:44:13.309636       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6e2f7af3-6246-4214-b9a8-58bf60d1d510", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-xvsd5
	I1107 23:44:18.587461       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1726bf48-4083-4ad3-8882-826ba96f3bae", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1107 23:44:18.608645       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6e2f7af3-6246-4214-b9a8-58bf60d1d510", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1107 23:46:57.953158       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"bfafce8a-509b-4d35-85ac-1e5d1615c63e", APIVersion:"apps/v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1107 23:46:57.969937       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"3b1f5d1e-a7a7-4257-911b-9890b257c527", APIVersion:"apps/v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-76wst
	
	* 
	* ==> kube-proxy [8850dba41affd528a74ed94c819647815d8ab7b2db8b7e5b5986c0835f3d48dd] <==
	* W1107 23:43:45.139122       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1107 23:43:45.155214       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1107 23:43:45.155269       1 server_others.go:186] Using iptables Proxier.
	I1107 23:43:45.155689       1 server.go:583] Version: v1.18.20
	I1107 23:43:45.160511       1 config.go:315] Starting service config controller
	I1107 23:43:45.160547       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1107 23:43:45.161035       1 config.go:133] Starting endpoints config controller
	I1107 23:43:45.161055       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1107 23:43:45.263534       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1107 23:43:45.263748       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [9ab3ba6d8723538c81a921c6f7cd1a2ed81012ecf5b64f7704a29510869add57] <==
	* I1107 23:43:25.255829       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1107 23:43:25.261092       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1107 23:43:25.266529       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:43:25.266630       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:43:25.266691       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1107 23:43:25.269073       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 23:43:25.273231       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:43:25.273344       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:43:25.273446       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:43:25.273528       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:43:25.273534       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:43:25.273618       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:43:25.273695       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:43:25.273751       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:43:25.273776       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:43:25.273851       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:43:25.276836       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:43:26.096109       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:43:26.154055       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:43:26.184760       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:43:26.199942       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:43:26.279465       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1107 23:43:28.466830       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1107 23:43:44.916876       1 factory.go:503] pod: kube-system/coredns-66bff467f8-gz62h is already present in the active queue
	E1107 23:43:45.108114       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Nov 07 23:47:02 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:02.920961    1647 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 008c276d07bb0026b9a43a65dc5b99b84e36b969aa82b80ac0b32463e1919688
	Nov 07 23:47:02 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:02.921211    1647 pod_workers.go:191] Error syncing pod da76fcb8-616b-44da-b383-b90685a303c2 ("hello-world-app-5f5d8b66bb-76wst_default(da76fcb8-616b-44da-b383-b90685a303c2)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-76wst_default(da76fcb8-616b-44da-b383-b90685a303c2)"
	Nov 07 23:47:03 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:03.421010    1647 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 07 23:47:03 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:03.421062    1647 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 07 23:47:03 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:03.421109    1647 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 07 23:47:03 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:03.421143    1647 pod_workers.go:191] Error syncing pod c7c21fb7-a32f-4cc7-9d95-e3f041227628 ("kube-ingress-dns-minikube_kube-system(c7c21fb7-a32f-4cc7-9d95-e3f041227628)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 07 23:47:03 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:03.923706    1647 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 008c276d07bb0026b9a43a65dc5b99b84e36b969aa82b80ac0b32463e1919688
	Nov 07 23:47:03 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:03.923999    1647 pod_workers.go:191] Error syncing pod da76fcb8-616b-44da-b383-b90685a303c2 ("hello-world-app-5f5d8b66bb-76wst_default(da76fcb8-616b-44da-b383-b90685a303c2)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-76wst_default(da76fcb8-616b-44da-b383-b90685a303c2)"
	Nov 07 23:47:14 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:14.048345    1647 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-m7247" (UniqueName: "kubernetes.io/secret/c7c21fb7-a32f-4cc7-9d95-e3f041227628-minikube-ingress-dns-token-m7247") pod "c7c21fb7-a32f-4cc7-9d95-e3f041227628" (UID: "c7c21fb7-a32f-4cc7-9d95-e3f041227628")
	Nov 07 23:47:14 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:14.052862    1647 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7c21fb7-a32f-4cc7-9d95-e3f041227628-minikube-ingress-dns-token-m7247" (OuterVolumeSpecName: "minikube-ingress-dns-token-m7247") pod "c7c21fb7-a32f-4cc7-9d95-e3f041227628" (UID: "c7c21fb7-a32f-4cc7-9d95-e3f041227628"). InnerVolumeSpecName "minikube-ingress-dns-token-m7247". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:47:14 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:14.148715    1647 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-m7247" (UniqueName: "kubernetes.io/secret/c7c21fb7-a32f-4cc7-9d95-e3f041227628-minikube-ingress-dns-token-m7247") on node "ingress-addon-legacy-878254" DevicePath ""
	Nov 07 23:47:15 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:15.420250    1647 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 008c276d07bb0026b9a43a65dc5b99b84e36b969aa82b80ac0b32463e1919688
	Nov 07 23:47:15 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:15.945372    1647 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 008c276d07bb0026b9a43a65dc5b99b84e36b969aa82b80ac0b32463e1919688
	Nov 07 23:47:15 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:15.945612    1647 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 825552756ab42da5d39ed8d09bb00cf97d741988f7e989e35a5dff7f29734189
	Nov 07 23:47:15 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:15.945844    1647 pod_workers.go:191] Error syncing pod da76fcb8-616b-44da-b383-b90685a303c2 ("hello-world-app-5f5d8b66bb-76wst_default(da76fcb8-616b-44da-b383-b90685a303c2)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-76wst_default(da76fcb8-616b-44da-b383-b90685a303c2)"
	Nov 07 23:47:15 ingress-addon-legacy-878254 kubelet[1647]: W1107 23:47:15.947627    1647 pod_container_deletor.go:77] Container "558a42a363d35a1e5a643e5370aa0dc33baabbeadaafd6dc7c33715fd58afa78" not found in pod's containers
	Nov 07 23:47:16 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:16.446866    1647 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-lzrmj.17957c0e931249a1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-lzrmj", UID:"760652ee-c008-46a0-919a-161be7afef51", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-878254"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14ad1011a4a21a1, ext:228542372358, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14ad1011a4a21a1, ext:228542372358, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-lzrmj.17957c0e931249a1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 07 23:47:16 ingress-addon-legacy-878254 kubelet[1647]: E1107 23:47:16.459266    1647 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-lzrmj.17957c0e931249a1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-lzrmj", UID:"760652ee-c008-46a0-919a-161be7afef51", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-878254"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14ad1011a4a21a1, ext:228542372358, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14ad1011af22b77, ext:228553384924, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-lzrmj.17957c0e931249a1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 07 23:47:18 ingress-addon-legacy-878254 kubelet[1647]: W1107 23:47:18.954241    1647 pod_container_deletor.go:77] Container "b0e48195648e55de722ac5ab40411298e06c398baf9c8dad16c0d3dafef33b48" not found in pod's containers
	Nov 07 23:47:20 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:20.568366    1647 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/760652ee-c008-46a0-919a-161be7afef51-webhook-cert") pod "760652ee-c008-46a0-919a-161be7afef51" (UID: "760652ee-c008-46a0-919a-161be7afef51")
	Nov 07 23:47:20 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:20.568435    1647 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-2r9px" (UniqueName: "kubernetes.io/secret/760652ee-c008-46a0-919a-161be7afef51-ingress-nginx-token-2r9px") pod "760652ee-c008-46a0-919a-161be7afef51" (UID: "760652ee-c008-46a0-919a-161be7afef51")
	Nov 07 23:47:20 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:20.574763    1647 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760652ee-c008-46a0-919a-161be7afef51-ingress-nginx-token-2r9px" (OuterVolumeSpecName: "ingress-nginx-token-2r9px") pod "760652ee-c008-46a0-919a-161be7afef51" (UID: "760652ee-c008-46a0-919a-161be7afef51"). InnerVolumeSpecName "ingress-nginx-token-2r9px". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:47:20 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:20.576020    1647 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760652ee-c008-46a0-919a-161be7afef51-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "760652ee-c008-46a0-919a-161be7afef51" (UID: "760652ee-c008-46a0-919a-161be7afef51"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:47:20 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:20.668750    1647 reconciler.go:319] Volume detached for volume "ingress-nginx-token-2r9px" (UniqueName: "kubernetes.io/secret/760652ee-c008-46a0-919a-161be7afef51-ingress-nginx-token-2r9px") on node "ingress-addon-legacy-878254" DevicePath ""
	Nov 07 23:47:20 ingress-addon-legacy-878254 kubelet[1647]: I1107 23:47:20.668799    1647 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/760652ee-c008-46a0-919a-161be7afef51-webhook-cert") on node "ingress-addon-legacy-878254" DevicePath ""
	
	* 
	* ==> storage-provisioner [d0288f5aa5a0b5042a95480882005462ebd38185e1905025c6b49c570803834f] <==
	* I1107 23:44:07.277994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:44:07.293165       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:44:07.293233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 23:44:07.300280       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 23:44:07.300701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-878254_22630493-8a8e-493c-99d4-6cce54654c9d!
	I1107 23:44:07.301758       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f5a69a53-b963-4ffa-937e-b8bfd815cb48", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-878254_22630493-8a8e-493c-99d4-6cce54654c9d became leader
	I1107 23:44:07.401890       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-878254_22630493-8a8e-493c-99d4-6cce54654c9d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-878254 -n ingress-addon-legacy-878254
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-878254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (175.78s)
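Note on the kubelet errors above: the kube-ingress-dns-minikube container never starts because the addon references the short image name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...", and CRI-O refuses to resolve short names when no unqualified-search registries are configured in /etc/containers/registries.conf. Below is a minimal workaround sketch only, assuming the profile name from this run (ingress-addon-legacy-878254) and accepting that restarting CRI-O on a live node is disruptive; fully qualifying the image (docker.io/cryptexlabs/minikube-ingress-dns:...) in the addon manifest would avoid any node-side change.

    # Hypothetical workaround sketch: allow docker.io as a default search registry
    # inside the minikube node, then restart CRI-O so it rereads registries.conf.
    out/minikube-linux-arm64 -p ingress-addon-legacy-878254 ssh -- \
      "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf"
    out/minikube-linux-arm64 -p ingress-addon-legacy-878254 ssh -- "sudo systemctl restart crio"
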

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-f95qf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-f95qf -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-f95qf -- sh -c "ping -c 1 192.168.58.1": exit status 1 (357.786767ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-f95qf): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-xprzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-xprzg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-xprzg -- sh -c "ping -c 1 192.168.58.1": exit status 1 (255.14058ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-xprzg): exit status 1
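Note: both busybox pods resolve host.minikube.internal but fail "ping -c 1 192.168.58.1" with "ping: permission denied (are you root?)". That busybox error typically means the container cannot open a raw ICMP socket, i.e. it is running without CAP_NET_RAW, which recent CRI-O versions do not grant by default. The sketch below shows one illustrative way to let these test pods ping, assuming the busybox deployment created by this test; it is not the project's actual fix.

    # Hypothetical sketch: grant NET_RAW to the busybox test pods so busybox ping
    # can open a raw ICMP socket (alternatively, the net.ipv4.ping_group_range
    # sysctl can allow unprivileged ICMP datagram sockets).
    out/minikube-linux-arm64 kubectl -p multinode-898977 -- patch deployment busybox --type=json -p='[
      {"op":"add",
       "path":"/spec/template/spec/containers/0/securityContext",
       "value":{"capabilities":{"add":["NET_RAW"]}}}
    ]'
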
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-898977
helpers_test.go:235: (dbg) docker inspect multinode-898977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517",
	        "Created": "2023-11-07T23:53:51.805888304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1521003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:53:52.141689853Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/hostname",
	        "HostsPath": "/var/lib/docker/containers/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/hosts",
	        "LogPath": "/var/lib/docker/containers/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517-json.log",
	        "Name": "/multinode-898977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-898977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-898977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9651a38d9265a8522544af01143745151c0b6c279dd3fad28cd4c7dddbb053a0-init/diff:/var/lib/docker/overlay2/8e491d7cb3241f95e04087f3d63eb57f6d89d07f6c4a9f8c41570cc55f16b330/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9651a38d9265a8522544af01143745151c0b6c279dd3fad28cd4c7dddbb053a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9651a38d9265a8522544af01143745151c0b6c279dd3fad28cd4c7dddbb053a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9651a38d9265a8522544af01143745151c0b6c279dd3fad28cd4c7dddbb053a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-898977",
	                "Source": "/var/lib/docker/volumes/multinode-898977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-898977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-898977",
	                "name.minikube.sigs.k8s.io": "multinode-898977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91e00f86a51afe7d6914ff193f556a71a41c8063a2e3d834b89b438c6bfa92fb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34140"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/91e00f86a51a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-898977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8776ce48ea1a",
	                        "multinode-898977"
	                    ],
	                    "NetworkID": "ca275e1d612b95150af9d14fcfc09037a5435b7220e41c6cb5d2a3fa3c8f895e",
	                    "EndpointID": "c1df3a27e2e854835d2515d4ed78b35e84dc720cd35cdb4715202fb7b539c555",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
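The docker inspect output above is what ties the ping target to the environment: 192.168.58.1 is the Gateway of the "multinode-898977" Docker network, i.e. the host-side address the pods were asked to reach. A small sketch for extracting that value directly, assuming the same container and network names as in this run:

    # Extract the host gateway of the cluster network from docker inspect;
    # for this run it prints 192.168.58.1.
    docker inspect multinode-898977 \
      --format '{{ (index .NetworkSettings.Networks "multinode-898977").Gateway }}'
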
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-898977 -n multinode-898977
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-898977 logs -n 25: (1.608298927s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-626667                           | mount-start-2-626667 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-626667 ssh -- ls                    | mount-start-2-626667 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-624879                           | mount-start-1-624879 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-626667 ssh -- ls                    | mount-start-2-626667 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-626667                           | mount-start-2-626667 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	| start   | -p mount-start-2-626667                           | mount-start-2-626667 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	| ssh     | mount-start-2-626667 ssh -- ls                    | mount-start-2-626667 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-626667                           | mount-start-2-626667 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	| delete  | -p mount-start-1-624879                           | mount-start-1-624879 | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:53 UTC |
	| start   | -p multinode-898977                               | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:53 UTC | 07 Nov 23 23:55 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- apply -f                   | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- rollout                    | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- get pods -o                | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- get pods -o                | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | busybox-5bc68d56bd-f95qf --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | busybox-5bc68d56bd-xprzg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | busybox-5bc68d56bd-f95qf --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | busybox-5bc68d56bd-xprzg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | busybox-5bc68d56bd-f95qf -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | busybox-5bc68d56bd-xprzg -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- get pods -o                | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-f95qf                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | busybox-5bc68d56bd-f95qf -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC | 07 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-xprzg                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-898977 -- exec                       | multinode-898977     | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | busybox-5bc68d56bd-xprzg -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:53:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:53:46.316428 1520543 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:53:46.316653 1520543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:53:46.316665 1520543 out.go:309] Setting ErrFile to fd 2...
	I1107 23:53:46.316672 1520543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:53:46.316976 1520543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1107 23:53:46.317399 1520543 out.go:303] Setting JSON to false
	I1107 23:53:46.318457 1520543 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23776,"bootTime":1699377451,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1107 23:53:46.318536 1520543 start.go:138] virtualization:  
	I1107 23:53:46.320934 1520543 out.go:177] * [multinode-898977] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:53:46.323567 1520543 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:53:46.325208 1520543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:53:46.323840 1520543 notify.go:220] Checking for updates...
	I1107 23:53:46.328536 1520543 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:53:46.330154 1520543 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1107 23:53:46.331799 1520543 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:53:46.333472 1520543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:53:46.335149 1520543 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:53:46.359466 1520543 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:53:46.359580 1520543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:53:46.439432 1520543 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-07 23:53:46.429511037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:53:46.439537 1520543 docker.go:295] overlay module found
	I1107 23:53:46.441316 1520543 out.go:177] * Using the docker driver based on user configuration
	I1107 23:53:46.443112 1520543 start.go:298] selected driver: docker
	I1107 23:53:46.443129 1520543 start.go:902] validating driver "docker" against <nil>
	I1107 23:53:46.443142 1520543 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:53:46.443760 1520543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:53:46.514310 1520543 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-07 23:53:46.50449951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:53:46.514476 1520543 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:53:46.514759 1520543 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:53:46.516530 1520543 out.go:177] * Using Docker driver with root privileges
	I1107 23:53:46.517906 1520543 cni.go:84] Creating CNI manager for ""
	I1107 23:53:46.517923 1520543 cni.go:136] 0 nodes found, recommending kindnet
	I1107 23:53:46.517934 1520543 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:53:46.517947 1520543 start_flags.go:323] config:
	{Name:multinode-898977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-898977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:53:46.520244 1520543 out.go:177] * Starting control plane node multinode-898977 in cluster multinode-898977
	I1107 23:53:46.521778 1520543 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:53:46.523381 1520543 out.go:177] * Pulling base image ...
	I1107 23:53:46.525005 1520543 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:53:46.525063 1520543 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1107 23:53:46.525074 1520543 cache.go:56] Caching tarball of preloaded images
	I1107 23:53:46.525103 1520543 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:53:46.525154 1520543 preload.go:174] Found /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1107 23:53:46.525176 1520543 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:53:46.525550 1520543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/config.json ...
	I1107 23:53:46.525580 1520543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/config.json: {Name:mka42056427d95c1a0dce34c50886a242a3334e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:53:46.542679 1520543 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:53:46.542705 1520543 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 23:53:46.542725 1520543 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:53:46.542790 1520543 start.go:365] acquiring machines lock for multinode-898977: {Name:mk01da37f9373fb1f7a481ba9eea767da8010116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:53:46.542913 1520543 start.go:369] acquired machines lock for "multinode-898977" in 100.898µs
	I1107 23:53:46.542955 1520543 start.go:93] Provisioning new machine with config: &{Name:multinode-898977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-898977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:53:46.543044 1520543 start.go:125] createHost starting for "" (driver="docker")
	I1107 23:53:46.545178 1520543 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 23:53:46.545439 1520543 start.go:159] libmachine.API.Create for "multinode-898977" (driver="docker")
	I1107 23:53:46.545493 1520543 client.go:168] LocalClient.Create starting
	I1107 23:53:46.545583 1520543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem
	I1107 23:53:46.545626 1520543 main.go:141] libmachine: Decoding PEM data...
	I1107 23:53:46.545649 1520543 main.go:141] libmachine: Parsing certificate...
	I1107 23:53:46.545706 1520543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem
	I1107 23:53:46.545728 1520543 main.go:141] libmachine: Decoding PEM data...
	I1107 23:53:46.545742 1520543 main.go:141] libmachine: Parsing certificate...
	I1107 23:53:46.546129 1520543 cli_runner.go:164] Run: docker network inspect multinode-898977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 23:53:46.563224 1520543 cli_runner.go:211] docker network inspect multinode-898977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 23:53:46.563306 1520543 network_create.go:281] running [docker network inspect multinode-898977] to gather additional debugging logs...
	I1107 23:53:46.563328 1520543 cli_runner.go:164] Run: docker network inspect multinode-898977
	W1107 23:53:46.581669 1520543 cli_runner.go:211] docker network inspect multinode-898977 returned with exit code 1
	I1107 23:53:46.581703 1520543 network_create.go:284] error running [docker network inspect multinode-898977]: docker network inspect multinode-898977: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-898977 not found
	I1107 23:53:46.581715 1520543 network_create.go:286] output of [docker network inspect multinode-898977]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-898977 not found
	
	** /stderr **
	I1107 23:53:46.581818 1520543 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:53:46.599510 1520543 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-45e1a0d37e35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:27:1e:3f:e9} reservation:<nil>}
	I1107 23:53:46.599886 1520543 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024ff1c0}
	I1107 23:53:46.599910 1520543 network_create.go:124] attempt to create docker network multinode-898977 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 23:53:46.599976 1520543 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-898977 multinode-898977
	I1107 23:53:46.671680 1520543 network_create.go:108] docker network multinode-898977 192.168.58.0/24 created
	I1107 23:53:46.671712 1520543 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-898977" container
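
For context, the network-create step logged above comes down to a single `docker network create` invocation with the flags shown on the `cli_runner.go` line. Below is a minimal, self-contained Go sketch of that call; the subnet, gateway, MTU and labels are copied from the log, the helper name and error handling are illustrative only and not minikube's implementation.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// createClusterNetwork runs the same `docker network create` command the log
	// shows for multinode-898977: a bridge network with a fixed subnet, gateway,
	// MTU and the minikube bookkeeping labels. Hypothetical helper, not minikube code.
	func createClusterNetwork(name, subnet, gateway string) error {
		args := []string{
			"network", "create",
			"--driver=bridge",
			"--subnet=" + subnet,
			"--gateway=" + gateway,
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=" + name,
			name,
		}
		out, err := exec.Command("docker", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker network create failed: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		if err := createClusterNetwork("multinode-898977", "192.168.58.0/24", "192.168.58.1"); err != nil {
			fmt.Println(err)
		}
	}
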
	I1107 23:53:46.671790 1520543 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:53:46.688258 1520543 cli_runner.go:164] Run: docker volume create multinode-898977 --label name.minikube.sigs.k8s.io=multinode-898977 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:53:46.711890 1520543 oci.go:103] Successfully created a docker volume multinode-898977
	I1107 23:53:46.711975 1520543 cli_runner.go:164] Run: docker run --rm --name multinode-898977-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-898977 --entrypoint /usr/bin/test -v multinode-898977:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:53:47.306055 1520543 oci.go:107] Successfully prepared a docker volume multinode-898977
	I1107 23:53:47.306115 1520543 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:53:47.306136 1520543 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:53:47.306230 1520543 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-898977:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:53:51.720978 1520543 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-898977:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.414707803s)
	I1107 23:53:51.721009 1520543 kic.go:203] duration metric: took 4.414870 seconds to extract preloaded images to volume
	W1107 23:53:51.721162 1520543 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:53:51.721279 1520543 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:53:51.790103 1520543 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-898977 --name multinode-898977 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-898977 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-898977 --network multinode-898977 --ip 192.168.58.2 --volume multinode-898977:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:53:52.151565 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Running}}
	I1107 23:53:52.181774 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Status}}
	I1107 23:53:52.207762 1520543 cli_runner.go:164] Run: docker exec multinode-898977 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:53:52.281506 1520543 oci.go:144] the created container "multinode-898977" has a running status.
	I1107 23:53:52.281541 1520543 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa...
	I1107 23:53:52.620273 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 23:53:52.620323 1520543 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:53:52.661496 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Status}}
	I1107 23:53:52.691674 1520543 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:53:52.691693 1520543 kic_runner.go:114] Args: [docker exec --privileged multinode-898977 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:53:52.773605 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Status}}
	I1107 23:53:52.798534 1520543 machine.go:88] provisioning docker machine ...
	I1107 23:53:52.798572 1520543 ubuntu.go:169] provisioning hostname "multinode-898977"
	I1107 23:53:52.798633 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:53:52.825298 1520543 main.go:141] libmachine: Using SSH client type: native
	I1107 23:53:52.825719 1520543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34143 <nil> <nil>}
	I1107 23:53:52.825733 1520543 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-898977 && echo "multinode-898977" | sudo tee /etc/hostname
	I1107 23:53:52.826597 1520543 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1107 23:53:55.969783 1520543 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-898977
	
	I1107 23:53:55.969878 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:53:55.989030 1520543 main.go:141] libmachine: Using SSH client type: native
	I1107 23:53:55.989461 1520543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34143 <nil> <nil>}
	I1107 23:53:55.989485 1520543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-898977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-898977/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-898977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:53:56.123518 1520543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
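
The hostname provisioning above is two shell commands run over the container's forwarded SSH port (127.0.0.1:34143 in this run) with the generated id_rsa key. A rough sketch of the same idea using golang.org/x/crypto/ssh follows; the function name, key path handling and error handling are illustrative assumptions, not the libmachine code referenced in the log.

	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	// runProvisionCommand dials the forwarded SSH port and runs one command,
	// e.g. the hostname setup shown in the log. Illustrative sketch only.
	func runProvisionCommand(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test container
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}
	
	func main() {
		out, err := runProvisionCommand("127.0.0.1:34143", "docker",
			os.ExpandEnv("$HOME/.minikube/machines/multinode-898977/id_rsa"), // hypothetical key location
			`sudo hostname multinode-898977 && echo "multinode-898977" | sudo tee /etc/hostname`)
		fmt.Println(out, err)
	}
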
	I1107 23:53:56.123556 1520543 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-1449649/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-1449649/.minikube}
	I1107 23:53:56.123576 1520543 ubuntu.go:177] setting up certificates
	I1107 23:53:56.123585 1520543 provision.go:83] configureAuth start
	I1107 23:53:56.123649 1520543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898977
	I1107 23:53:56.143135 1520543 provision.go:138] copyHostCerts
	I1107 23:53:56.143178 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1107 23:53:56.143211 1520543 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem, removing ...
	I1107 23:53:56.143222 1520543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1107 23:53:56.143302 1520543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem (1082 bytes)
	I1107 23:53:56.143433 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1107 23:53:56.143463 1520543 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem, removing ...
	I1107 23:53:56.143474 1520543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1107 23:53:56.143506 1520543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem (1123 bytes)
	I1107 23:53:56.143563 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1107 23:53:56.143583 1520543 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem, removing ...
	I1107 23:53:56.143588 1520543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1107 23:53:56.143620 1520543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem (1675 bytes)
	I1107 23:53:56.143678 1520543 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem org=jenkins.multinode-898977 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-898977]
	I1107 23:53:56.325106 1520543 provision.go:172] copyRemoteCerts
	I1107 23:53:56.325179 1520543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:53:56.325225 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:53:56.343763 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:53:56.437168 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:53:56.437229 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 23:53:56.466635 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:53:56.466692 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1107 23:53:56.495495 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:53:56.495592 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:53:56.524868 1520543 provision.go:86] duration metric: configureAuth took 401.263694ms
	I1107 23:53:56.524899 1520543 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:53:56.525137 1520543 config.go:182] Loaded profile config "multinode-898977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:53:56.525249 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:53:56.544888 1520543 main.go:141] libmachine: Using SSH client type: native
	I1107 23:53:56.545313 1520543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34143 <nil> <nil>}
	I1107 23:53:56.545328 1520543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:53:56.788191 1520543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:53:56.788279 1520543 machine.go:91] provisioned docker machine in 3.989721229s
	I1107 23:53:56.788348 1520543 client.go:171] LocalClient.Create took 10.242842251s
	I1107 23:53:56.788391 1520543 start.go:167] duration metric: libmachine.API.Create for "multinode-898977" took 10.242953314s
	I1107 23:53:56.788414 1520543 start.go:300] post-start starting for "multinode-898977" (driver="docker")
	I1107 23:53:56.788464 1520543 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:53:56.788546 1520543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:53:56.788620 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:53:56.807070 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:53:56.903055 1520543 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:53:56.907144 1520543 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1107 23:53:56.907167 1520543 command_runner.go:130] > NAME="Ubuntu"
	I1107 23:53:56.907174 1520543 command_runner.go:130] > VERSION_ID="22.04"
	I1107 23:53:56.907181 1520543 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1107 23:53:56.907187 1520543 command_runner.go:130] > VERSION_CODENAME=jammy
	I1107 23:53:56.907192 1520543 command_runner.go:130] > ID=ubuntu
	I1107 23:53:56.907196 1520543 command_runner.go:130] > ID_LIKE=debian
	I1107 23:53:56.907202 1520543 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1107 23:53:56.907208 1520543 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1107 23:53:56.907223 1520543 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1107 23:53:56.907236 1520543 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1107 23:53:56.907245 1520543 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1107 23:53:56.907288 1520543 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:53:56.907315 1520543 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:53:56.907328 1520543 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:53:56.907340 1520543 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:53:56.907350 1520543 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/addons for local assets ...
	I1107 23:53:56.907409 1520543 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/files for local assets ...
	I1107 23:53:56.907500 1520543 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> 14550192.pem in /etc/ssl/certs
	I1107 23:53:56.907511 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> /etc/ssl/certs/14550192.pem
	I1107 23:53:56.907610 1520543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:53:56.917636 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /etc/ssl/certs/14550192.pem (1708 bytes)
	I1107 23:53:56.947353 1520543 start.go:303] post-start completed in 158.883759ms
	I1107 23:53:56.947790 1520543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898977
	I1107 23:53:56.965310 1520543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/config.json ...
	I1107 23:53:56.965582 1520543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:53:56.965637 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:53:56.983749 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:53:57.076518 1520543 command_runner.go:130] > 17%!
	(MISSING)I1107 23:53:57.076650 1520543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:53:57.083578 1520543 command_runner.go:130] > 163G
	I1107 23:53:57.083619 1520543 start.go:128] duration metric: createHost completed in 10.540561781s
	I1107 23:53:57.083636 1520543 start.go:83] releasing machines lock for "multinode-898977", held for 10.540708619s
	I1107 23:53:57.083746 1520543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898977
	I1107 23:53:57.103136 1520543 ssh_runner.go:195] Run: cat /version.json
	I1107 23:53:57.103169 1520543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:53:57.103196 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:53:57.103226 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:53:57.127041 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:53:57.128425 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:53:57.413764 1520543 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:53:57.413889 1520543 command_runner.go:130] > {"iso_version": "v1.32.0-1698920115-17545", "kicbase_version": "v0.0.42", "minikube_version": "v1.32.0", "commit": "adec9b238c91ffe56105b349a612d102f1601cd2"}
	I1107 23:53:57.414064 1520543 ssh_runner.go:195] Run: systemctl --version
	I1107 23:53:57.419279 1520543 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1107 23:53:57.419360 1520543 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1107 23:53:57.419697 1520543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:53:57.572427 1520543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:53:57.577995 1520543 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1107 23:53:57.578096 1520543 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1107 23:53:57.578120 1520543 command_runner.go:130] > Device: 3ah/58d	Inode: 5189945     Links: 1
	I1107 23:53:57.578152 1520543 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:53:57.578172 1520543 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1107 23:53:57.578185 1520543 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1107 23:53:57.578204 1520543 command_runner.go:130] > Change: 2023-11-07 23:30:04.327579928 +0000
	I1107 23:53:57.578215 1520543 command_runner.go:130] >  Birth: 2023-11-07 23:30:04.327579928 +0000
	I1107 23:53:57.578959 1520543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:53:57.603465 1520543 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:53:57.603573 1520543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:53:57.642009 1520543 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1107 23:53:57.642134 1520543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1107 23:53:57.642165 1520543 start.go:472] detecting cgroup driver to use...
	I1107 23:53:57.642200 1520543 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:53:57.642269 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:53:57.661035 1520543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:53:57.675494 1520543 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:53:57.675597 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:53:57.691470 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:53:57.709058 1520543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:53:57.818137 1520543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:53:57.926475 1520543 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1107 23:53:57.926547 1520543 docker.go:219] disabling docker service ...
	I1107 23:53:57.926638 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:53:57.948338 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:53:57.962329 1520543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:53:58.064819 1520543 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1107 23:53:58.064926 1520543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:53:58.169429 1520543 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1107 23:53:58.169519 1520543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:53:58.183732 1520543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:53:58.202035 1520543 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1107 23:53:58.203217 1520543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:53:58.203299 1520543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:53:58.216004 1520543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:53:58.216083 1520543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:53:58.228752 1520543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:53:58.240979 1520543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:53:58.253358 1520543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:53:58.264689 1520543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:53:58.273841 1520543 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1107 23:53:58.275125 1520543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:53:58.285236 1520543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:53:58.387394 1520543 ssh_runner.go:195] Run: sudo systemctl restart crio
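
The CRI-O configuration step above is line-oriented editing of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by a daemon-reload and restart. The following Go sketch mirrors what those sed commands achieve, only to make their intent explicit; the file path and option values come from the log, while the helper itself is an assumption and not the minikube implementation.

	package main
	
	import (
		"os"
		"regexp"
	)
	
	// rewriteCrioConf mirrors the sed edits in the log: pin the pause image to
	// registry.k8s.io/pause:3.9 and switch CRI-O to the cgroupfs manager with
	// conmon placed in the "pod" cgroup.
	func rewriteCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		conf := string(data)
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return os.WriteFile(path, []byte(conf), 0o644)
	}
	
	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			panic(err)
		}
		// The log then reloads systemd units and restarts the runtime:
		// `sudo systemctl daemon-reload && sudo systemctl restart crio`.
	}
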
	I1107 23:53:58.511019 1520543 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:53:58.511103 1520543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:53:58.515728 1520543 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:53:58.515791 1520543 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:53:58.515813 1520543 command_runner.go:130] > Device: 44h/68d	Inode: 190         Links: 1
	I1107 23:53:58.515839 1520543 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:53:58.515872 1520543 command_runner.go:130] > Access: 2023-11-07 23:53:58.495123991 +0000
	I1107 23:53:58.515900 1520543 command_runner.go:130] > Modify: 2023-11-07 23:53:58.495123991 +0000
	I1107 23:53:58.515922 1520543 command_runner.go:130] > Change: 2023-11-07 23:53:58.495123991 +0000
	I1107 23:53:58.515941 1520543 command_runner.go:130] >  Birth: -
	I1107 23:53:58.516007 1520543 start.go:540] Will wait 60s for crictl version
	I1107 23:53:58.516086 1520543 ssh_runner.go:195] Run: which crictl
	I1107 23:53:58.520462 1520543 command_runner.go:130] > /usr/bin/crictl
	I1107 23:53:58.520879 1520543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:53:58.560744 1520543 command_runner.go:130] > Version:  0.1.0
	I1107 23:53:58.560766 1520543 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:53:58.560772 1520543 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1107 23:53:58.560778 1520543 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:53:58.560802 1520543 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1107 23:53:58.560873 1520543 ssh_runner.go:195] Run: crio --version
	I1107 23:53:58.606693 1520543 command_runner.go:130] > crio version 1.24.6
	I1107 23:53:58.606713 1520543 command_runner.go:130] > Version:          1.24.6
	I1107 23:53:58.606722 1520543 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1107 23:53:58.606728 1520543 command_runner.go:130] > GitTreeState:     clean
	I1107 23:53:58.606735 1520543 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1107 23:53:58.606741 1520543 command_runner.go:130] > GoVersion:        go1.18.2
	I1107 23:53:58.606746 1520543 command_runner.go:130] > Compiler:         gc
	I1107 23:53:58.606752 1520543 command_runner.go:130] > Platform:         linux/arm64
	I1107 23:53:58.606761 1520543 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:53:58.606771 1520543 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:53:58.606776 1520543 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:53:58.606781 1520543 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:53:58.609113 1520543 ssh_runner.go:195] Run: crio --version
	I1107 23:53:58.652800 1520543 command_runner.go:130] > crio version 1.24.6
	I1107 23:53:58.652874 1520543 command_runner.go:130] > Version:          1.24.6
	I1107 23:53:58.652898 1520543 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1107 23:53:58.652918 1520543 command_runner.go:130] > GitTreeState:     clean
	I1107 23:53:58.652955 1520543 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1107 23:53:58.652984 1520543 command_runner.go:130] > GoVersion:        go1.18.2
	I1107 23:53:58.653005 1520543 command_runner.go:130] > Compiler:         gc
	I1107 23:53:58.653025 1520543 command_runner.go:130] > Platform:         linux/arm64
	I1107 23:53:58.653057 1520543 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:53:58.653095 1520543 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:53:58.653113 1520543 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:53:58.653133 1520543 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:53:58.657232 1520543 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1107 23:53:58.658670 1520543 cli_runner.go:164] Run: docker network inspect multinode-898977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:53:58.679826 1520543 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1107 23:53:58.684313 1520543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:53:58.697533 1520543 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:53:58.697605 1520543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:53:58.768974 1520543 command_runner.go:130] > {
	I1107 23:53:58.768997 1520543 command_runner.go:130] >   "images": [
	I1107 23:53:58.769003 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769012 1520543 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1107 23:53:58.769026 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769037 1520543 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1107 23:53:58.769045 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769051 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.769063 1520543 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1107 23:53:58.769076 1520543 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1107 23:53:58.769081 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769087 1520543 command_runner.go:130] >       "size": "60867618",
	I1107 23:53:58.769093 1520543 command_runner.go:130] >       "uid": null,
	I1107 23:53:58.769099 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.769107 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.769112 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.769116 1520543 command_runner.go:130] >     },
	I1107 23:53:58.769120 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769131 1520543 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1107 23:53:58.769136 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769142 1520543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 23:53:58.769149 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769157 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.769170 1520543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1107 23:53:58.769180 1520543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1107 23:53:58.769188 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769201 1520543 command_runner.go:130] >       "size": "29037500",
	I1107 23:53:58.769209 1520543 command_runner.go:130] >       "uid": null,
	I1107 23:53:58.769214 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.769219 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.769225 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.769232 1520543 command_runner.go:130] >     },
	I1107 23:53:58.769236 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769247 1520543 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1107 23:53:58.769251 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769258 1520543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1107 23:53:58.769266 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769271 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.769280 1520543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1107 23:53:58.769291 1520543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1107 23:53:58.769298 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769304 1520543 command_runner.go:130] >       "size": "51393451",
	I1107 23:53:58.769309 1520543 command_runner.go:130] >       "uid": null,
	I1107 23:53:58.769315 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.769322 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.769327 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.769334 1520543 command_runner.go:130] >     },
	I1107 23:53:58.769339 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769346 1520543 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1107 23:53:58.769353 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769359 1520543 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1107 23:53:58.769364 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769371 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.769380 1520543 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1107 23:53:58.769388 1520543 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1107 23:53:58.769401 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769408 1520543 command_runner.go:130] >       "size": "182203183",
	I1107 23:53:58.769413 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.769419 1520543 command_runner.go:130] >         "value": "0"
	I1107 23:53:58.769424 1520543 command_runner.go:130] >       },
	I1107 23:53:58.769435 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.769440 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.769445 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.769453 1520543 command_runner.go:130] >     },
	I1107 23:53:58.769457 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769465 1520543 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1107 23:53:58.769470 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769480 1520543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1107 23:53:58.769486 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769491 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.769503 1520543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1107 23:53:58.769513 1520543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1107 23:53:58.769519 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769524 1520543 command_runner.go:130] >       "size": "121054158",
	I1107 23:53:58.769529 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.769534 1520543 command_runner.go:130] >         "value": "0"
	I1107 23:53:58.769543 1520543 command_runner.go:130] >       },
	I1107 23:53:58.769549 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.769553 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.769560 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.769567 1520543 command_runner.go:130] >     },
	I1107 23:53:58.769572 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769582 1520543 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1107 23:53:58.769587 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769593 1520543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1107 23:53:58.769600 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769605 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.769615 1520543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1107 23:53:58.769627 1520543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1107 23:53:58.769631 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769639 1520543 command_runner.go:130] >       "size": "117252916",
	I1107 23:53:58.769646 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.769651 1520543 command_runner.go:130] >         "value": "0"
	I1107 23:53:58.769658 1520543 command_runner.go:130] >       },
	I1107 23:53:58.769664 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.769670 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.769675 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.769681 1520543 command_runner.go:130] >     },
	I1107 23:53:58.769685 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769693 1520543 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1107 23:53:58.769702 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769708 1520543 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1107 23:53:58.769712 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769719 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.769731 1520543 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1107 23:53:58.769743 1520543 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1107 23:53:58.769748 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769755 1520543 command_runner.go:130] >       "size": "69926807",
	I1107 23:53:58.769763 1520543 command_runner.go:130] >       "uid": null,
	I1107 23:53:58.769768 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.769773 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.769778 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.769788 1520543 command_runner.go:130] >     },
	I1107 23:53:58.769792 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769800 1520543 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1107 23:53:58.769806 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769813 1520543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1107 23:53:58.769819 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769824 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.769865 1520543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1107 23:53:58.769877 1520543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1107 23:53:58.769882 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.769890 1520543 command_runner.go:130] >       "size": "59188020",
	I1107 23:53:58.769899 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.769904 1520543 command_runner.go:130] >         "value": "0"
	I1107 23:53:58.769911 1520543 command_runner.go:130] >       },
	I1107 23:53:58.769916 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.769921 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.769926 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.769932 1520543 command_runner.go:130] >     },
	I1107 23:53:58.769938 1520543 command_runner.go:130] >     {
	I1107 23:53:58.769949 1520543 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1107 23:53:58.769956 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.769961 1520543 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1107 23:53:58.769966 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.770029 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.770045 1520543 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1107 23:53:58.770054 1520543 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1107 23:53:58.770059 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.770064 1520543 command_runner.go:130] >       "size": "520014",
	I1107 23:53:58.770072 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.770081 1520543 command_runner.go:130] >         "value": "65535"
	I1107 23:53:58.770086 1520543 command_runner.go:130] >       },
	I1107 23:53:58.770093 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.770098 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.770103 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.770110 1520543 command_runner.go:130] >     }
	I1107 23:53:58.770115 1520543 command_runner.go:130] >   ]
	I1107 23:53:58.770125 1520543 command_runner.go:130] > }
	I1107 23:53:58.773080 1520543 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:53:58.773105 1520543 crio.go:415] Images already preloaded, skipping extraction
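
The preload check above consumes the JSON that `sudo crictl images --output json` prints, which is the same structure reproduced line by line in the log (id, repoTags, repoDigests, size, pinned, ...). A small parsing sketch under those assumptions; the struct and function names are illustrative, not minikube's own types.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// criImage mirrors the fields visible in the `crictl images --output json` log output.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}
	
	type criImageList struct {
		Images []criImage `json:"images"`
	}
	
	// listPreloadedImages runs crictl and returns the repo tags of every image the
	// runtime already has, which is what an "all images are preloaded" check needs.
	func listPreloadedImages() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var list criImageList
		if err := json.Unmarshal(out, &list); err != nil {
			return nil, err
		}
		var tags []string
		for _, img := range list.Images {
			tags = append(tags, img.RepoTags...)
		}
		return tags, nil
	}
	
	func main() {
		tags, err := listPreloadedImages()
		fmt.Println(tags, err)
	}
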
	I1107 23:53:58.773164 1520543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:53:58.816496 1520543 command_runner.go:130] > {
	I1107 23:53:58.816519 1520543 command_runner.go:130] >   "images": [
	I1107 23:53:58.816524 1520543 command_runner.go:130] >     {
	I1107 23:53:58.816534 1520543 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1107 23:53:58.816539 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.816546 1520543 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1107 23:53:58.816553 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.816559 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.816576 1520543 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1107 23:53:58.816585 1520543 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1107 23:53:58.816593 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.816598 1520543 command_runner.go:130] >       "size": "60867618",
	I1107 23:53:58.816603 1520543 command_runner.go:130] >       "uid": null,
	I1107 23:53:58.816611 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.816617 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.816622 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.816629 1520543 command_runner.go:130] >     },
	I1107 23:53:58.816633 1520543 command_runner.go:130] >     {
	I1107 23:53:58.816642 1520543 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1107 23:53:58.816649 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.816656 1520543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 23:53:58.816660 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.816666 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.816677 1520543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1107 23:53:58.816693 1520543 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1107 23:53:58.816698 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.816706 1520543 command_runner.go:130] >       "size": "29037500",
	I1107 23:53:58.816710 1520543 command_runner.go:130] >       "uid": null,
	I1107 23:53:58.816715 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.816720 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.816725 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.816729 1520543 command_runner.go:130] >     },
	I1107 23:53:58.816734 1520543 command_runner.go:130] >     {
	I1107 23:53:58.816741 1520543 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1107 23:53:58.816749 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.816755 1520543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1107 23:53:58.816762 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.816767 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.816792 1520543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1107 23:53:58.816802 1520543 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1107 23:53:58.816810 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.816815 1520543 command_runner.go:130] >       "size": "51393451",
	I1107 23:53:58.816821 1520543 command_runner.go:130] >       "uid": null,
	I1107 23:53:58.816826 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.816831 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.816836 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.816844 1520543 command_runner.go:130] >     },
	I1107 23:53:58.816849 1520543 command_runner.go:130] >     {
	I1107 23:53:58.816858 1520543 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1107 23:53:58.816865 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.816872 1520543 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1107 23:53:58.816879 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.816884 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.816892 1520543 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1107 23:53:58.816903 1520543 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1107 23:53:58.816914 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.816919 1520543 command_runner.go:130] >       "size": "182203183",
	I1107 23:53:58.816924 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.816931 1520543 command_runner.go:130] >         "value": "0"
	I1107 23:53:58.816938 1520543 command_runner.go:130] >       },
	I1107 23:53:58.816946 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.816953 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.816959 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.816966 1520543 command_runner.go:130] >     },
	I1107 23:53:58.816973 1520543 command_runner.go:130] >     {
	I1107 23:53:58.816980 1520543 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1107 23:53:58.816989 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.816995 1520543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1107 23:53:58.817001 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817007 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.817018 1520543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1107 23:53:58.817030 1520543 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1107 23:53:58.817034 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817039 1520543 command_runner.go:130] >       "size": "121054158",
	I1107 23:53:58.817046 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.817051 1520543 command_runner.go:130] >         "value": "0"
	I1107 23:53:58.817058 1520543 command_runner.go:130] >       },
	I1107 23:53:58.817062 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.817069 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.817077 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.817082 1520543 command_runner.go:130] >     },
	I1107 23:53:58.817086 1520543 command_runner.go:130] >     {
	I1107 23:53:58.817093 1520543 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1107 23:53:58.817101 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.817107 1520543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1107 23:53:58.817114 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817119 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.817129 1520543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1107 23:53:58.817141 1520543 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1107 23:53:58.817146 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817155 1520543 command_runner.go:130] >       "size": "117252916",
	I1107 23:53:58.817159 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.817164 1520543 command_runner.go:130] >         "value": "0"
	I1107 23:53:58.817169 1520543 command_runner.go:130] >       },
	I1107 23:53:58.817176 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.817183 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.817199 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.817207 1520543 command_runner.go:130] >     },
	I1107 23:53:58.817211 1520543 command_runner.go:130] >     {
	I1107 23:53:58.817219 1520543 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1107 23:53:58.817226 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.817232 1520543 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1107 23:53:58.817236 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817241 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.817250 1520543 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1107 23:53:58.817261 1520543 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1107 23:53:58.817274 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817279 1520543 command_runner.go:130] >       "size": "69926807",
	I1107 23:53:58.817284 1520543 command_runner.go:130] >       "uid": null,
	I1107 23:53:58.817289 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.817296 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.817301 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.817305 1520543 command_runner.go:130] >     },
	I1107 23:53:58.817312 1520543 command_runner.go:130] >     {
	I1107 23:53:58.817322 1520543 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1107 23:53:58.817327 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.817333 1520543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1107 23:53:58.817338 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817346 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.817375 1520543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1107 23:53:58.817388 1520543 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1107 23:53:58.817393 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817403 1520543 command_runner.go:130] >       "size": "59188020",
	I1107 23:53:58.817407 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.817412 1520543 command_runner.go:130] >         "value": "0"
	I1107 23:53:58.817417 1520543 command_runner.go:130] >       },
	I1107 23:53:58.817422 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.817429 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.817435 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.817440 1520543 command_runner.go:130] >     },
	I1107 23:53:58.817446 1520543 command_runner.go:130] >     {
	I1107 23:53:58.817454 1520543 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1107 23:53:58.817461 1520543 command_runner.go:130] >       "repoTags": [
	I1107 23:53:58.817469 1520543 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1107 23:53:58.817473 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817481 1520543 command_runner.go:130] >       "repoDigests": [
	I1107 23:53:58.817489 1520543 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1107 23:53:58.817498 1520543 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1107 23:53:58.817503 1520543 command_runner.go:130] >       ],
	I1107 23:53:58.817510 1520543 command_runner.go:130] >       "size": "520014",
	I1107 23:53:58.817515 1520543 command_runner.go:130] >       "uid": {
	I1107 23:53:58.817522 1520543 command_runner.go:130] >         "value": "65535"
	I1107 23:53:58.817527 1520543 command_runner.go:130] >       },
	I1107 23:53:58.817533 1520543 command_runner.go:130] >       "username": "",
	I1107 23:53:58.817538 1520543 command_runner.go:130] >       "spec": null,
	I1107 23:53:58.817545 1520543 command_runner.go:130] >       "pinned": false
	I1107 23:53:58.817549 1520543 command_runner.go:130] >     }
	I1107 23:53:58.817553 1520543 command_runner.go:130] >   ]
	I1107 23:53:58.817557 1520543 command_runner.go:130] > }
	I1107 23:53:58.817686 1520543 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:53:58.817698 1520543 cache_images.go:84] Images are preloaded, skipping loading
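	The JSON above has the shape of CRI image-list output (repoTags, repoDigests, size, pinned), and the cache check concludes from it that every required image is already present, so loading is skipped. A minimal Go sketch of that kind of check, assuming only the JSON shape shown above (the struct names, the hard-coded tag list, and the file-argument handling are illustrative, not minikube's actual code):

	// images_check.go: sketch that parses CRI image-list JSON of the shape
	// shown above and reports whether every required tag is present.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		raw, err := os.ReadFile(os.Args[1]) // path to a saved copy of the JSON dump
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Tags taken from the dump above; the full required set is version-specific.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.28.3",
			"registry.k8s.io/etcd:3.5.9-0",
			"registry.k8s.io/pause:3.9",
		}
		for _, tag := range required {
			if !have[tag] {
				fmt.Println("missing:", tag)
				return
			}
		}
		fmt.Println("all images are preloaded")
	}

	Run against a saved copy of the dump, the sketch prints "all images are preloaded" when every listed tag is found.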
	I1107 23:53:58.817771 1520543 ssh_runner.go:195] Run: crio config
	I1107 23:53:58.868874 1520543 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:53:58.868901 1520543 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:53:58.868910 1520543 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:53:58.868914 1520543 command_runner.go:130] > #
	I1107 23:53:58.868922 1520543 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:53:58.868931 1520543 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:53:58.868939 1520543 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:53:58.868961 1520543 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:53:58.868973 1520543 command_runner.go:130] > # reload'.
	I1107 23:53:58.868981 1520543 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:53:58.868992 1520543 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:53:58.869000 1520543 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:53:58.869013 1520543 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:53:58.869018 1520543 command_runner.go:130] > [crio]
	I1107 23:53:58.869025 1520543 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:53:58.869034 1520543 command_runner.go:130] > # container images, in this directory.
	I1107 23:53:58.869826 1520543 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1107 23:53:58.869844 1520543 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:53:58.870531 1520543 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1107 23:53:58.870550 1520543 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:53:58.870559 1520543 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:53:58.871220 1520543 command_runner.go:130] > # storage_driver = "vfs"
	I1107 23:53:58.871238 1520543 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1107 23:53:58.871246 1520543 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:53:58.871583 1520543 command_runner.go:130] > # storage_option = [
	I1107 23:53:58.871948 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.871966 1520543 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:53:58.871975 1520543 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:53:58.872636 1520543 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:53:58.872652 1520543 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:53:58.872681 1520543 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:53:58.872691 1520543 command_runner.go:130] > # always happen on a node reboot
	I1107 23:53:58.873356 1520543 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:53:58.873375 1520543 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:53:58.873383 1520543 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:53:58.873402 1520543 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:53:58.874121 1520543 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:53:58.874140 1520543 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:53:58.874150 1520543 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:53:58.874813 1520543 command_runner.go:130] > # internal_wipe = true
	I1107 23:53:58.874830 1520543 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:53:58.874838 1520543 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:53:58.874845 1520543 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:53:58.875636 1520543 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:53:58.875661 1520543 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:53:58.875667 1520543 command_runner.go:130] > [crio.api]
	I1107 23:53:58.875677 1520543 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:53:58.876418 1520543 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:53:58.876447 1520543 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:53:58.877115 1520543 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:53:58.877132 1520543 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:53:58.877140 1520543 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:53:58.877775 1520543 command_runner.go:130] > # stream_port = "0"
	I1107 23:53:58.877797 1520543 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:53:58.878463 1520543 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:53:58.878480 1520543 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:53:58.878972 1520543 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:53:58.878996 1520543 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:53:58.879013 1520543 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:53:58.879018 1520543 command_runner.go:130] > # minutes.
	I1107 23:53:58.879549 1520543 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:53:58.879566 1520543 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:53:58.879580 1520543 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:53:58.880133 1520543 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:53:58.880150 1520543 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:53:58.880158 1520543 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:53:58.880165 1520543 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:53:58.880702 1520543 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:53:58.880719 1520543 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:53:58.881452 1520543 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1107 23:53:58.881471 1520543 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:53:58.882203 1520543 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1107 23:53:58.882233 1520543 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:53:58.882245 1520543 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:53:58.882250 1520543 command_runner.go:130] > [crio.runtime]
	I1107 23:53:58.882259 1520543 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:53:58.882270 1520543 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:53:58.882275 1520543 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:53:58.882285 1520543 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:53:58.882663 1520543 command_runner.go:130] > # default_ulimits = [
	I1107 23:53:58.883034 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.883051 1520543 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:53:58.883775 1520543 command_runner.go:130] > # no_pivot = false
	I1107 23:53:58.883791 1520543 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:53:58.883800 1520543 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:53:58.884517 1520543 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:53:58.884534 1520543 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:53:58.884541 1520543 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:53:58.884550 1520543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:53:58.885123 1520543 command_runner.go:130] > # conmon = ""
	I1107 23:53:58.885137 1520543 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:53:58.885146 1520543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:53:58.885526 1520543 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:53:58.885541 1520543 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:53:58.885548 1520543 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:53:58.885557 1520543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:53:58.885938 1520543 command_runner.go:130] > # conmon_env = [
	I1107 23:53:58.886344 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.886372 1520543 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:53:58.886380 1520543 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:53:58.886387 1520543 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:53:58.886759 1520543 command_runner.go:130] > # default_env = [
	I1107 23:53:58.887144 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.887161 1520543 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:53:58.887971 1520543 command_runner.go:130] > # selinux = false
	I1107 23:53:58.887986 1520543 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:53:58.887997 1520543 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:53:58.888027 1520543 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:53:58.888592 1520543 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:53:58.888607 1520543 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:53:58.888615 1520543 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:53:58.888623 1520543 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:53:58.888630 1520543 command_runner.go:130] > # which might increase security.
	I1107 23:53:58.889365 1520543 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1107 23:53:58.889387 1520543 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:53:58.889395 1520543 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:53:58.889404 1520543 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:53:58.889415 1520543 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:53:58.889421 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:53:58.890143 1520543 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:53:58.890164 1520543 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:53:58.890184 1520543 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:53:58.890719 1520543 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:53:58.890737 1520543 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:53:58.890743 1520543 command_runner.go:130] > # irqbalance daemon.
	I1107 23:53:58.891448 1520543 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:53:58.891473 1520543 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:53:58.891481 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:53:58.892036 1520543 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:53:58.892051 1520543 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:53:58.892437 1520543 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:53:58.892452 1520543 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:53:58.893011 1520543 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:53:58.893028 1520543 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:53:58.893037 1520543 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:53:58.893042 1520543 command_runner.go:130] > # will be added.
	I1107 23:53:58.893400 1520543 command_runner.go:130] > # default_capabilities = [
	I1107 23:53:58.893852 1520543 command_runner.go:130] > # 	"CHOWN",
	I1107 23:53:58.894260 1520543 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:53:58.894639 1520543 command_runner.go:130] > # 	"FSETID",
	I1107 23:53:58.895024 1520543 command_runner.go:130] > # 	"FOWNER",
	I1107 23:53:58.895392 1520543 command_runner.go:130] > # 	"SETGID",
	I1107 23:53:58.895659 1520543 command_runner.go:130] > # 	"SETUID",
	I1107 23:53:58.895672 1520543 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:53:58.895678 1520543 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:53:58.895682 1520543 command_runner.go:130] > # 	"KILL",
	I1107 23:53:58.895695 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.895708 1520543 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1107 23:53:58.895717 1520543 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1107 23:53:58.896225 1520543 command_runner.go:130] > # add_inheritable_capabilities = true
	I1107 23:53:58.896241 1520543 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:53:58.896249 1520543 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:53:58.896255 1520543 command_runner.go:130] > # default_sysctls = [
	I1107 23:53:58.896261 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.896267 1520543 command_runner.go:130] > # List of devices on the host that a
	I1107 23:53:58.896279 1520543 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:53:58.896285 1520543 command_runner.go:130] > # allowed_devices = [
	I1107 23:53:58.896485 1520543 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:53:58.896740 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.896753 1520543 command_runner.go:130] > # List of additional devices, specified as
	I1107 23:53:58.896800 1520543 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:53:58.896812 1520543 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:53:58.896820 1520543 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:53:58.896828 1520543 command_runner.go:130] > # additional_devices = [
	I1107 23:53:58.896837 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.896846 1520543 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:53:58.896851 1520543 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:53:58.897121 1520543 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:53:58.897135 1520543 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:53:58.897139 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.897147 1520543 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:53:58.897156 1520543 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:53:58.897168 1520543 command_runner.go:130] > # Defaults to false.
	I1107 23:53:58.897454 1520543 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:53:58.897470 1520543 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:53:58.897478 1520543 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:53:58.897484 1520543 command_runner.go:130] > # hooks_dir = [
	I1107 23:53:58.897492 1520543 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:53:58.897788 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.897803 1520543 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:53:58.897811 1520543 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:53:58.897822 1520543 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:53:58.897826 1520543 command_runner.go:130] > #
	I1107 23:53:58.897834 1520543 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:53:58.897845 1520543 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:53:58.897852 1520543 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:53:58.897860 1520543 command_runner.go:130] > #
	I1107 23:53:58.897868 1520543 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:53:58.897877 1520543 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:53:58.897888 1520543 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:53:58.897895 1520543 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:53:58.897901 1520543 command_runner.go:130] > #
	I1107 23:53:58.897907 1520543 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:53:58.897921 1520543 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:53:58.897929 1520543 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:53:58.898213 1520543 command_runner.go:130] > # pids_limit = 0
	I1107 23:53:58.898229 1520543 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1107 23:53:58.898237 1520543 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:53:58.898255 1520543 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:53:58.898266 1520543 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:53:58.898275 1520543 command_runner.go:130] > # log_size_max = -1
	I1107 23:53:58.898301 1520543 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1107 23:53:58.898310 1520543 command_runner.go:130] > # log_to_journald = false
	I1107 23:53:58.898318 1520543 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:53:58.898590 1520543 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:53:58.898603 1520543 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:53:58.898609 1520543 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:53:58.898616 1520543 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:53:58.898623 1520543 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:53:58.898634 1520543 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:53:58.899082 1520543 command_runner.go:130] > # read_only = false
	I1107 23:53:58.899098 1520543 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:53:58.899106 1520543 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:53:58.899112 1520543 command_runner.go:130] > # live configuration reload.
	I1107 23:53:58.899117 1520543 command_runner.go:130] > # log_level = "info"
	I1107 23:53:58.899127 1520543 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:53:58.899137 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:53:58.899142 1520543 command_runner.go:130] > # log_filter = ""
	I1107 23:53:58.899150 1520543 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:53:58.899162 1520543 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:53:58.899168 1520543 command_runner.go:130] > # separated by comma.
	I1107 23:53:58.899430 1520543 command_runner.go:130] > # uid_mappings = ""
	I1107 23:53:58.899445 1520543 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:53:58.899459 1520543 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:53:58.899469 1520543 command_runner.go:130] > # separated by comma.
	I1107 23:53:58.899474 1520543 command_runner.go:130] > # gid_mappings = ""
	I1107 23:53:58.899482 1520543 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:53:58.899492 1520543 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:53:58.899505 1520543 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:53:58.899515 1520543 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:53:58.899523 1520543 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:53:58.899530 1520543 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:53:58.899538 1520543 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:53:58.899801 1520543 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:53:58.899820 1520543 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:53:58.899828 1520543 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:53:58.899835 1520543 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:53:58.899847 1520543 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:53:58.899855 1520543 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:53:58.899862 1520543 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:53:58.899874 1520543 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1107 23:53:58.899892 1520543 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:53:58.899900 1520543 command_runner.go:130] > # drop_infra_ctr = true
	I1107 23:53:58.899914 1520543 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:53:58.899921 1520543 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:53:58.899934 1520543 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:53:58.900159 1520543 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:53:58.900176 1520543 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:53:58.900182 1520543 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:53:58.900461 1520543 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:53:58.900480 1520543 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:53:58.900486 1520543 command_runner.go:130] > # pinns_path = ""
	I1107 23:53:58.900494 1520543 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:53:58.900504 1520543 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:53:58.900520 1520543 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:53:58.900538 1520543 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:53:58.900547 1520543 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:53:58.900557 1520543 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1107 23:53:58.900572 1520543 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1107 23:53:58.900580 1520543 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:53:58.900593 1520543 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:53:58.900599 1520543 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:53:58.900890 1520543 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:53:58.900905 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.900913 1520543 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:53:58.900921 1520543 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:53:58.900932 1520543 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:53:58.900948 1520543 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:53:58.900956 1520543 command_runner.go:130] > #
	I1107 23:53:58.900962 1520543 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:53:58.900968 1520543 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:53:58.900973 1520543 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:53:58.900979 1520543 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:53:58.900987 1520543 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:53:58.900993 1520543 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:53:58.901000 1520543 command_runner.go:130] > # Where:
	I1107 23:53:58.901007 1520543 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:53:58.901016 1520543 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:53:58.901028 1520543 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:53:58.901036 1520543 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:53:58.901044 1520543 command_runner.go:130] > #   in $PATH.
	I1107 23:53:58.901051 1520543 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:53:58.901057 1520543 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:53:58.901065 1520543 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:53:58.901072 1520543 command_runner.go:130] > #   state.
	I1107 23:53:58.901085 1520543 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:53:58.901097 1520543 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1107 23:53:58.901105 1520543 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:53:58.901114 1520543 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:53:58.901125 1520543 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:53:58.901135 1520543 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:53:58.901141 1520543 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:53:58.901151 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:53:58.901160 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:53:58.901170 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:53:58.901177 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:53:58.901186 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:53:58.901197 1520543 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:53:58.901205 1520543 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:53:58.901217 1520543 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:53:58.901223 1520543 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:53:58.901229 1520543 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:53:58.901235 1520543 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1107 23:53:58.901242 1520543 command_runner.go:130] > runtime_type = "oci"
	I1107 23:53:58.901538 1520543 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:53:58.901554 1520543 command_runner.go:130] > runtime_config_path = ""
	I1107 23:53:58.901571 1520543 command_runner.go:130] > monitor_path = ""
	I1107 23:53:58.901576 1520543 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:53:58.901583 1520543 command_runner.go:130] > monitor_exec_cgroup = ""
	I1107 23:53:58.901622 1520543 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:53:58.901632 1520543 command_runner.go:130] > # running containers
	I1107 23:53:58.901637 1520543 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:53:58.901645 1520543 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:53:58.901655 1520543 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:53:58.901665 1520543 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1107 23:53:58.901673 1520543 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:53:58.901684 1520543 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:53:58.901690 1520543 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:53:58.901696 1520543 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:53:58.901702 1520543 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:53:58.901707 1520543 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:53:58.901715 1520543 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:53:58.901724 1520543 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:53:58.901732 1520543 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:53:58.901744 1520543 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1107 23:53:58.901757 1520543 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:53:58.901765 1520543 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:53:58.901778 1520543 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:53:58.901787 1520543 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:53:58.901795 1520543 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:53:58.901806 1520543 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:53:58.901810 1520543 command_runner.go:130] > # Example:
	I1107 23:53:58.901816 1520543 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:53:58.901826 1520543 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:53:58.901835 1520543 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:53:58.901842 1520543 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:53:58.901849 1520543 command_runner.go:130] > # cpuset = 0
	I1107 23:53:58.901854 1520543 command_runner.go:130] > # cpushares = "0-1"
	I1107 23:53:58.901858 1520543 command_runner.go:130] > # Where:
	I1107 23:53:58.901864 1520543 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:53:58.901872 1520543 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:53:58.901879 1520543 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:53:58.901888 1520543 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:53:58.901898 1520543 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:53:58.901908 1520543 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1107 23:53:58.902174 1520543 command_runner.go:130] > # 
	I1107 23:53:58.902191 1520543 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:53:58.902196 1520543 command_runner.go:130] > #
	I1107 23:53:58.902206 1520543 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:53:58.902218 1520543 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:53:58.902226 1520543 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:53:58.902238 1520543 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:53:58.902250 1520543 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:53:58.902255 1520543 command_runner.go:130] > [crio.image]
	I1107 23:53:58.902262 1520543 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:53:58.902523 1520543 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:53:58.902539 1520543 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:53:58.902547 1520543 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:53:58.902553 1520543 command_runner.go:130] > # global_auth_file = ""
	I1107 23:53:58.902559 1520543 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:53:58.902569 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:53:58.902582 1520543 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:53:58.902595 1520543 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:53:58.902602 1520543 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:53:58.902611 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:53:58.902874 1520543 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:53:58.902888 1520543 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:53:58.902896 1520543 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1107 23:53:58.902904 1520543 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1107 23:53:58.902912 1520543 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:53:58.902920 1520543 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:53:58.902928 1520543 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:53:58.902939 1520543 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:53:58.902946 1520543 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:53:58.902957 1520543 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:53:58.902964 1520543 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:53:58.903223 1520543 command_runner.go:130] > # signature_policy = ""
	I1107 23:53:58.903239 1520543 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:53:58.903248 1520543 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:53:58.903253 1520543 command_runner.go:130] > # changing them here.
	I1107 23:53:58.903267 1520543 command_runner.go:130] > # insecure_registries = [
	I1107 23:53:58.903277 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.903285 1520543 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:53:58.903294 1520543 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1107 23:53:58.903562 1520543 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:53:58.903576 1520543 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:53:58.903832 1520543 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:53:58.903846 1520543 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1107 23:53:58.903851 1520543 command_runner.go:130] > # CNI plugins.
	I1107 23:53:58.903856 1520543 command_runner.go:130] > [crio.network]
	I1107 23:53:58.903863 1520543 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:53:58.903874 1520543 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1107 23:53:58.903879 1520543 command_runner.go:130] > # cni_default_network = ""
	I1107 23:53:58.903886 1520543 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:53:58.903896 1520543 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:53:58.903903 1520543 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:53:58.903911 1520543 command_runner.go:130] > # plugin_dirs = [
	I1107 23:53:58.904362 1520543 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:53:58.904374 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.904382 1520543 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1107 23:53:58.904386 1520543 command_runner.go:130] > [crio.metrics]
	I1107 23:53:58.904393 1520543 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:53:58.904398 1520543 command_runner.go:130] > # enable_metrics = false
	I1107 23:53:58.904411 1520543 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:53:58.904422 1520543 command_runner.go:130] > # Per default all metrics are enabled.
	I1107 23:53:58.904430 1520543 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:53:58.904440 1520543 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:53:58.904450 1520543 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:53:58.904455 1520543 command_runner.go:130] > # metrics_collectors = [
	I1107 23:53:58.904710 1520543 command_runner.go:130] > # 	"operations",
	I1107 23:53:58.904730 1520543 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:53:58.904737 1520543 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:53:58.904742 1520543 command_runner.go:130] > # 	"operations_errors",
	I1107 23:53:58.904958 1520543 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:53:58.905232 1520543 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:53:58.905253 1520543 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:53:58.905259 1520543 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:53:58.905265 1520543 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:53:58.905271 1520543 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:53:58.905276 1520543 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:53:58.905282 1520543 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:53:58.905291 1520543 command_runner.go:130] > # 	"containers_oom",
	I1107 23:53:58.905296 1520543 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:53:58.905744 1520543 command_runner.go:130] > # 	"operations_total",
	I1107 23:53:58.905756 1520543 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:53:58.905763 1520543 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:53:58.905768 1520543 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:53:58.905773 1520543 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:53:58.905779 1520543 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:53:58.905792 1520543 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:53:58.905797 1520543 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:53:58.905803 1520543 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:53:58.905811 1520543 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:53:58.905815 1520543 command_runner.go:130] > # ]
	I1107 23:53:58.905823 1520543 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:53:58.905829 1520543 command_runner.go:130] > # metrics_port = 9090
	I1107 23:53:58.905835 1520543 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:53:58.905842 1520543 command_runner.go:130] > # metrics_socket = ""
	I1107 23:53:58.905849 1520543 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:53:58.905859 1520543 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:53:58.905866 1520543 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:53:58.905877 1520543 command_runner.go:130] > # certificate on any modification event.
	I1107 23:53:58.905881 1520543 command_runner.go:130] > # metrics_cert = ""
	I1107 23:53:58.905890 1520543 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:53:58.905904 1520543 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:53:58.905909 1520543 command_runner.go:130] > # metrics_key = ""
	I1107 23:53:58.905916 1520543 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:53:58.905920 1520543 command_runner.go:130] > [crio.tracing]
	I1107 23:53:58.905929 1520543 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:53:58.905935 1520543 command_runner.go:130] > # enable_tracing = false
	I1107 23:53:58.905944 1520543 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1107 23:53:58.905949 1520543 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:53:58.905961 1520543 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:53:58.905971 1520543 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:53:58.905989 1520543 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:53:58.905995 1520543 command_runner.go:130] > [crio.stats]
	I1107 23:53:58.906002 1520543 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:53:58.906009 1520543 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:53:58.906018 1520543 command_runner.go:130] > # stats_collection_period = 0
	I1107 23:53:58.907742 1520543 command_runner.go:130] ! time="2023-11-07 23:53:58.864515247Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1107 23:53:58.907765 1520543 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
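	Almost every line of the "crio config" dump above is a commented-out default; the effective overrides visible in it are conmon_cgroup = "pod", cgroup_manager = "cgroupfs", the [crio.runtime.runtimes.runc] table, and pause_image = "registry.k8s.io/pause:3.9". A minimal Go sketch that reproduces this kind of inspection outside the test harness, assuming "crio" and password-less "sudo" are available on the node (this is not part of minikube's code):

	// crio_overrides.go: sketch that runs `crio config` and prints only the
	// lines that are not comments or blank, i.e. the section headers plus the
	// settings that differ from the built-in defaults.
	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crio", "config").Output()
		if err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			fmt.Println(line) // e.g. cgroup_manager = "cgroupfs"
		}
	}

	The filter keeps section headers as well as the overridden keys, which is enough to see at a glance what was changed from the defaults.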
	I1107 23:53:58.908109 1520543 cni.go:84] Creating CNI manager for ""
	I1107 23:53:58.908127 1520543 cni.go:136] 1 nodes found, recommending kindnet
	I1107 23:53:58.908158 1520543 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:53:58.908180 1520543 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-898977 NodeName:multinode-898977 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:53:58.908319 1520543 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-898977"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:53:58.908392 1520543 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-898977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-898977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:53:58.908459 1520543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:53:58.917836 1520543 command_runner.go:130] > kubeadm
	I1107 23:53:58.917854 1520543 command_runner.go:130] > kubectl
	I1107 23:53:58.917859 1520543 command_runner.go:130] > kubelet
	I1107 23:53:58.919142 1520543 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:53:58.919215 1520543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:53:58.929694 1520543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1107 23:53:58.951279 1520543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:53:58.973062 1520543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1107 23:53:58.994170 1520543 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:53:58.999341 1520543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
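
The /etc/hosts rewrite above is a single bash pipeline: drop any stale line ending in a tab plus control-plane.minikube.internal, then append the current IP mapping. A minimal Go sketch of the same logic, assuming a hypothetical helper (this is not minikube's actual code, just an illustration of what the logged command does):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinControlPlane removes any existing "...<TAB>name" entry and appends
	// a fresh "ip<TAB>name" mapping, mirroring the grep -v / echo pipeline
	// in the log. Error handling is trimmed for brevity.
	func pinControlPlane(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // same effect as grep -v $'\t<name>$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinControlPlane("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
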
	I1107 23:53:59.014979 1520543 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977 for IP: 192.168.58.2
	I1107 23:53:59.015018 1520543 certs.go:190] acquiring lock for shared ca certs: {Name:mk4f8465cbc85ba57ebf3be6025d59928913c61b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:53:59.015185 1520543 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key
	I1107 23:53:59.015241 1520543 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key
	I1107 23:53:59.015292 1520543 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.key
	I1107 23:53:59.015308 1520543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.crt with IP's: []
	I1107 23:53:59.422883 1520543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.crt ...
	I1107 23:53:59.422916 1520543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.crt: {Name:mkadee703a9a547aeab84ce2f37fb9d18d2e71e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:53:59.423115 1520543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.key ...
	I1107 23:53:59.423131 1520543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.key: {Name:mk066c7e479f19c12a3febf4fc87605d2c6e0840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:53:59.423229 1520543 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.key.cee25041
	I1107 23:53:59.423244 1520543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:54:00.130841 1520543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.crt.cee25041 ...
	I1107 23:54:00.132499 1520543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.crt.cee25041: {Name:mk37688b288ea2ad11fc6d0d83368dbb8e5c364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:54:00.132927 1520543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.key.cee25041 ...
	I1107 23:54:00.161555 1520543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.key.cee25041: {Name:mk0711685667c9588f8105e4d1ca8c9010f80c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:54:00.161777 1520543 certs.go:337] copying /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.crt
	I1107 23:54:00.161879 1520543 certs.go:341] copying /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.key
	I1107 23:54:00.161936 1520543 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.key
	I1107 23:54:00.161949 1520543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.crt with IP's: []
	I1107 23:54:00.744825 1520543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.crt ...
	I1107 23:54:00.744858 1520543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.crt: {Name:mk72a754e002044f7d315d1ab646868375c53d36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:54:00.745068 1520543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.key ...
	I1107 23:54:00.745088 1520543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.key: {Name:mk5f7d9f95ab6664e87c3c9d6db667e0256991f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
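
The crypto.go steps above issue the profile's client, apiserver, and proxy-client key pairs signed by the shared minikube CA. The following is a minimal standard-library sketch of issuing one CA-signed client certificate; it is not minikube's crypto.go, and it assumes the CA key on disk is PKCS#1 PEM ("RSA PRIVATE KEY") and a typical "minikube-user"/"system:masters" subject:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	// signClientCert issues a client-auth certificate signed by the given CA,
	// roughly the "generating minikube-user signed cert" step in the log.
	func signClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}

	func main() {
		// CA material as laid down under /var/lib/minikube/certs in the log.
		caCrtPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		caKeyPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.key")
		if err != nil {
			panic(err)
		}
		crtBlock, _ := pem.Decode(caCrtPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caCert, err := x509.ParseCertificate(crtBlock.Bytes)
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			panic(err)
		}
		certPEM, keyPEM, err := signClientCert(caCert, caKey)
		if err != nil {
			panic(err)
		}
		os.WriteFile("client.crt", certPEM, 0644)
		os.WriteFile("client.key", keyPEM, 0600)
	}
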
	I1107 23:54:00.745197 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 23:54:00.745221 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 23:54:00.745234 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 23:54:00.745249 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 23:54:00.745261 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:54:00.745276 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:54:00.745291 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:54:00.745303 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:54:00.745368 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem (1338 bytes)
	W1107 23:54:00.745409 1520543 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019_empty.pem, impossibly tiny 0 bytes
	I1107 23:54:00.745423 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:54:00.745450 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem (1082 bytes)
	I1107 23:54:00.745486 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:54:00.745516 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem (1675 bytes)
	I1107 23:54:00.745567 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem (1708 bytes)
	I1107 23:54:00.745600 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> /usr/share/ca-certificates/14550192.pem
	I1107 23:54:00.745617 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:54:00.745628 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem -> /usr/share/ca-certificates/1455019.pem
	I1107 23:54:00.746293 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:54:00.775481 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 23:54:00.805756 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:54:00.834955 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 23:54:00.863375 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:54:00.891314 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:54:00.919938 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:54:00.950230 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:54:00.978805 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /usr/share/ca-certificates/14550192.pem (1708 bytes)
	I1107 23:54:01.008917 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:54:01.037747 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem --> /usr/share/ca-certificates/1455019.pem (1338 bytes)
	I1107 23:54:01.066710 1520543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:54:01.088355 1520543 ssh_runner.go:195] Run: openssl version
	I1107 23:54:01.095112 1520543 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1107 23:54:01.095566 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14550192.pem && ln -fs /usr/share/ca-certificates/14550192.pem /etc/ssl/certs/14550192.pem"
	I1107 23:54:01.107603 1520543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14550192.pem
	I1107 23:54:01.112537 1520543 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:38 /usr/share/ca-certificates/14550192.pem
	I1107 23:54:01.112572 1520543 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:38 /usr/share/ca-certificates/14550192.pem
	I1107 23:54:01.112637 1520543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14550192.pem
	I1107 23:54:01.121027 1520543 command_runner.go:130] > 3ec20f2e
	I1107 23:54:01.121470 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14550192.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:54:01.134238 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:54:01.146705 1520543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:54:01.151512 1520543 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:30 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:54:01.151545 1520543 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:30 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:54:01.151596 1520543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:54:01.160572 1520543 command_runner.go:130] > b5213941
	I1107 23:54:01.161014 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:54:01.173544 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455019.pem && ln -fs /usr/share/ca-certificates/1455019.pem /etc/ssl/certs/1455019.pem"
	I1107 23:54:01.185681 1520543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455019.pem
	I1107 23:54:01.190615 1520543 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:38 /usr/share/ca-certificates/1455019.pem
	I1107 23:54:01.190663 1520543 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:38 /usr/share/ca-certificates/1455019.pem
	I1107 23:54:01.190714 1520543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455019.pem
	I1107 23:54:01.199422 1520543 command_runner.go:130] > 51391683
	I1107 23:54:01.199814 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1455019.pem /etc/ssl/certs/51391683.0"
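
Each CA bundle copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 above), which is how OpenSSL-based clients locate trust anchors. A small sketch of that hash-and-symlink step, shelling out to openssl exactly as the logged commands do (hypothetical helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mirrors the two logged commands: compute the OpenSSL
	// subject hash of a PEM certificate, then symlink it as <hash>.0 in
	// /etc/ssl/certs.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace an existing link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
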
	I1107 23:54:01.212428 1520543 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:54:01.217307 1520543 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:54:01.217399 1520543 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:54:01.217465 1520543 kubeadm.go:404] StartCluster: {Name:multinode-898977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-898977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:54:01.217550 1520543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:54:01.217611 1520543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:54:01.260994 1520543 cri.go:89] found id: ""
	I1107 23:54:01.261075 1520543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:54:01.270880 1520543 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1107 23:54:01.270905 1520543 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1107 23:54:01.270914 1520543 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1107 23:54:01.272140 1520543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:54:01.283393 1520543 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 23:54:01.283461 1520543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:54:01.294401 1520543 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1107 23:54:01.294427 1520543 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1107 23:54:01.294437 1520543 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1107 23:54:01.294448 1520543 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:54:01.294474 1520543 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:54:01.294507 1520543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 23:54:01.349140 1520543 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1107 23:54:01.349168 1520543 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1107 23:54:01.349401 1520543 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:54:01.349418 1520543 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 23:54:01.400417 1520543 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:54:01.400447 1520543 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:54:01.400500 1520543 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1107 23:54:01.400514 1520543 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1049-aws
	I1107 23:54:01.400546 1520543 kubeadm.go:322] OS: Linux
	I1107 23:54:01.400555 1520543 command_runner.go:130] > OS: Linux
	I1107 23:54:01.400597 1520543 kubeadm.go:322] CGROUPS_CPU: enabled
	I1107 23:54:01.400606 1520543 command_runner.go:130] > CGROUPS_CPU: enabled
	I1107 23:54:01.400652 1520543 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1107 23:54:01.400665 1520543 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1107 23:54:01.400709 1520543 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1107 23:54:01.400718 1520543 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1107 23:54:01.400762 1520543 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1107 23:54:01.400771 1520543 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1107 23:54:01.400840 1520543 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1107 23:54:01.400852 1520543 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1107 23:54:01.400897 1520543 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1107 23:54:01.400906 1520543 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1107 23:54:01.400948 1520543 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1107 23:54:01.400957 1520543 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1107 23:54:01.401001 1520543 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1107 23:54:01.401011 1520543 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1107 23:54:01.401053 1520543 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1107 23:54:01.401062 1520543 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1107 23:54:01.484780 1520543 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:54:01.484815 1520543 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:54:01.484905 1520543 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:54:01.484917 1520543 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:54:01.485004 1520543 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:54:01.485013 1520543 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:54:01.738370 1520543 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:54:01.740197 1520543 out.go:204]   - Generating certificates and keys ...
	I1107 23:54:01.738514 1520543 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:54:01.740308 1520543 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:54:01.740323 1520543 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1107 23:54:01.740455 1520543 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:54:01.740476 1520543 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1107 23:54:02.122336 1520543 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:54:02.122368 1520543 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:54:02.557391 1520543 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:54:02.557467 1520543 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:54:03.167942 1520543 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:54:03.168013 1520543 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1107 23:54:04.528050 1520543 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:54:04.528080 1520543 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1107 23:54:04.758410 1520543 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:54:04.758437 1520543 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1107 23:54:04.758717 1520543 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-898977] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1107 23:54:04.758740 1520543 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-898977] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1107 23:54:04.978991 1520543 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:54:04.979022 1520543 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1107 23:54:04.979389 1520543 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-898977] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1107 23:54:04.979402 1520543 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-898977] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1107 23:54:05.421238 1520543 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:54:05.421264 1520543 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:54:05.867516 1520543 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:54:05.867559 1520543 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:54:06.334568 1520543 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:54:06.334599 1520543 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1107 23:54:06.334886 1520543 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:54:06.334905 1520543 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:54:06.511314 1520543 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:54:06.511345 1520543 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:54:07.127435 1520543 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:54:07.127466 1520543 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:54:08.190814 1520543 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:54:08.190841 1520543 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:54:08.622003 1520543 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:54:08.622029 1520543 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:54:08.622770 1520543 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:54:08.622794 1520543 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:54:08.625830 1520543 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:54:08.627851 1520543 out.go:204]   - Booting up control plane ...
	I1107 23:54:08.625964 1520543 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:54:08.627959 1520543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:54:08.627972 1520543 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:54:08.628082 1520543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:54:08.628088 1520543 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:54:08.628741 1520543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:54:08.628763 1520543 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:54:08.640215 1520543 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:54:08.640240 1520543 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:54:08.641072 1520543 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:54:08.641090 1520543 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:54:08.641145 1520543 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:54:08.641154 1520543 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:54:08.744852 1520543 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:54:08.744884 1520543 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:54:16.747993 1520543 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002480 seconds
	I1107 23:54:16.748018 1520543 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002480 seconds
	I1107 23:54:16.748118 1520543 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:54:16.748123 1520543 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:54:16.760825 1520543 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:54:16.760850 1520543 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:54:17.286511 1520543 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:54:17.286537 1520543 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:54:17.286708 1520543 kubeadm.go:322] [mark-control-plane] Marking the node multinode-898977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:54:17.286715 1520543 command_runner.go:130] > [mark-control-plane] Marking the node multinode-898977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:54:17.797662 1520543 kubeadm.go:322] [bootstrap-token] Using token: 2ee3d4.rq5355w5q0fj7sx2
	I1107 23:54:17.799472 1520543 out.go:204]   - Configuring RBAC rules ...
	I1107 23:54:17.797762 1520543 command_runner.go:130] > [bootstrap-token] Using token: 2ee3d4.rq5355w5q0fj7sx2
	I1107 23:54:17.799601 1520543 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:54:17.799613 1520543 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:54:17.805026 1520543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:54:17.805053 1520543 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:54:17.813213 1520543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:54:17.813240 1520543 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:54:17.816955 1520543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:54:17.816987 1520543 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:54:17.822353 1520543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:54:17.822381 1520543 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:54:17.827625 1520543 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:54:17.827651 1520543 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:54:17.848310 1520543 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:54:17.848345 1520543 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:54:18.134389 1520543 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:54:18.134423 1520543 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1107 23:54:18.213609 1520543 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:54:18.213637 1520543 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1107 23:54:18.213644 1520543 kubeadm.go:322] 
	I1107 23:54:18.213700 1520543 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:54:18.213710 1520543 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1107 23:54:18.213714 1520543 kubeadm.go:322] 
	I1107 23:54:18.213786 1520543 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:54:18.213795 1520543 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1107 23:54:18.213799 1520543 kubeadm.go:322] 
	I1107 23:54:18.213823 1520543 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:54:18.213832 1520543 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1107 23:54:18.213886 1520543 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:54:18.213895 1520543 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:54:18.213941 1520543 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:54:18.213950 1520543 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:54:18.213955 1520543 kubeadm.go:322] 
	I1107 23:54:18.214027 1520543 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1107 23:54:18.214037 1520543 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1107 23:54:18.214041 1520543 kubeadm.go:322] 
	I1107 23:54:18.214086 1520543 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:54:18.214096 1520543 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:54:18.214102 1520543 kubeadm.go:322] 
	I1107 23:54:18.214157 1520543 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:54:18.214165 1520543 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1107 23:54:18.214234 1520543 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:54:18.214239 1520543 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:54:18.214301 1520543 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:54:18.214306 1520543 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:54:18.214310 1520543 kubeadm.go:322] 
	I1107 23:54:18.214388 1520543 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:54:18.214392 1520543 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:54:18.214463 1520543 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:54:18.214468 1520543 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1107 23:54:18.214472 1520543 kubeadm.go:322] 
	I1107 23:54:18.214553 1520543 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2ee3d4.rq5355w5q0fj7sx2 \
	I1107 23:54:18.214558 1520543 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2ee3d4.rq5355w5q0fj7sx2 \
	I1107 23:54:18.214654 1520543 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 \
	I1107 23:54:18.214658 1520543 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 \
	I1107 23:54:18.214677 1520543 kubeadm.go:322] 	--control-plane 
	I1107 23:54:18.214681 1520543 command_runner.go:130] > 	--control-plane 
	I1107 23:54:18.214685 1520543 kubeadm.go:322] 
	I1107 23:54:18.214764 1520543 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:54:18.214769 1520543 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:54:18.214773 1520543 kubeadm.go:322] 
	I1107 23:54:18.214850 1520543 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2ee3d4.rq5355w5q0fj7sx2 \
	I1107 23:54:18.214854 1520543 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2ee3d4.rq5355w5q0fj7sx2 \
	I1107 23:54:18.214954 1520543 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 
	I1107 23:54:18.214961 1520543 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 
	I1107 23:54:18.217149 1520543 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1107 23:54:18.217172 1520543 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1107 23:54:18.217270 1520543 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:54:18.217277 1520543 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
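
The --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A short sketch that recomputes it from ca.crt (standard library only, for illustration):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// kubeadm's discovery hash is sha256 over the DER-encoded Subject
		// Public Key Info of the CA certificate.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
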
	I1107 23:54:18.217289 1520543 cni.go:84] Creating CNI manager for ""
	I1107 23:54:18.217295 1520543 cni.go:136] 1 nodes found, recommending kindnet
	I1107 23:54:18.219491 1520543 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:54:18.221214 1520543 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:54:18.237453 1520543 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:54:18.237477 1520543 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1107 23:54:18.237485 1520543 command_runner.go:130] > Device: 3ah/58d	Inode: 5193642     Links: 1
	I1107 23:54:18.237493 1520543 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:54:18.237500 1520543 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1107 23:54:18.237506 1520543 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1107 23:54:18.237513 1520543 command_runner.go:130] > Change: 2023-11-07 23:30:05.023574228 +0000
	I1107 23:54:18.237519 1520543 command_runner.go:130] >  Birth: 2023-11-07 23:30:04.971574654 +0000
	I1107 23:54:18.238533 1520543 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:54:18.238550 1520543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:54:18.276214 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:54:19.225731 1520543 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1107 23:54:19.232791 1520543 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1107 23:54:19.245393 1520543 command_runner.go:130] > serviceaccount/kindnet created
	I1107 23:54:19.261126 1520543 command_runner.go:130] > daemonset.apps/kindnet created
	I1107 23:54:19.267197 1520543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:54:19.267361 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=multinode-898977 minikube.k8s.io/updated_at=2023_11_07T23_54_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:19.267368 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:19.292334 1520543 command_runner.go:130] > -16
	I1107 23:54:19.292454 1520543 ops.go:34] apiserver oom_adj: -16
	I1107 23:54:19.388224 1520543 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1107 23:54:19.388358 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:19.469487 1520543 command_runner.go:130] > node/multinode-898977 labeled
	I1107 23:54:19.533612 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:19.537353 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:19.633548 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:20.138222 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:20.235985 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:20.638395 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:20.733211 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:21.137787 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:21.242539 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:21.638248 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:21.725804 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:22.138652 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:22.233126 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:22.637714 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:22.730369 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:23.137755 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:23.242708 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:23.638053 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:23.725972 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:24.138657 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:24.231708 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:24.638438 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:24.739827 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:25.137875 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:25.228996 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:25.638618 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:25.732767 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:26.138353 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:26.237020 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:26.637797 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:26.732421 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:27.137776 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:27.244205 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:27.637650 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:27.728450 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:28.137997 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:28.235225 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:28.637817 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:28.737875 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:29.138470 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:29.237931 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:29.637725 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:29.730368 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:30.137965 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:30.244280 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:30.638317 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:30.729805 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:31.138025 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:31.304150 1520543 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:54:31.637992 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:54:31.732943 1520543 command_runner.go:130] > NAME      SECRETS   AGE
	I1107 23:54:31.732962 1520543 command_runner.go:130] > default   0         0s
	I1107 23:54:31.736950 1520543 kubeadm.go:1081] duration metric: took 12.469670159s to wait for elevateKubeSystemPrivileges.
	I1107 23:54:31.736984 1520543 kubeadm.go:406] StartCluster complete in 30.51952256s
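
The run of repeated "kubectl get sa default" calls above is a poll: the "default" ServiceAccount is created asynchronously after kubeadm init, so minikube retries roughly every half second (about 12s here) until it exists before proceeding. A rough sketch of that wait loop, shelling out to kubectl like the logged commands (hypothetical helper, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
	// deadline passes, mirroring the retry loop in the log above.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // serviceaccount exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("serviceaccount %q not created within %s", "default", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
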
	I1107 23:54:31.737002 1520543 settings.go:142] acquiring lock: {Name:mk87503ca622eddfd1b600486068357de065638c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:54:31.737067 1520543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:54:31.737735 1520543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/kubeconfig: {Name:mk5ec442d2fb6aea8291322e188521db23ee465e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:54:31.738305 1520543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:54:31.738614 1520543 kapi.go:59] client config for multinode-898977: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.key", CAFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:54:31.739800 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:54:31.739820 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:31.739830 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:31.739837 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:31.740266 1520543 config.go:182] Loaded profile config "multinode-898977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:54:31.740327 1520543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:54:31.740425 1520543 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:54:31.740504 1520543 addons.go:69] Setting storage-provisioner=true in profile "multinode-898977"
	I1107 23:54:31.740519 1520543 addons.go:231] Setting addon storage-provisioner=true in "multinode-898977"
	I1107 23:54:31.740560 1520543 host.go:66] Checking if "multinode-898977" exists ...
	I1107 23:54:31.741046 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Status}}
	I1107 23:54:31.741499 1520543 addons.go:69] Setting default-storageclass=true in profile "multinode-898977"
	I1107 23:54:31.741521 1520543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-898977"
	I1107 23:54:31.741810 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Status}}
	I1107 23:54:31.742133 1520543 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 23:54:31.768447 1520543 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1107 23:54:31.768470 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:31.768479 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:31 GMT
	I1107 23:54:31.768485 1520543 round_trippers.go:580]     Audit-Id: e49f50cd-8717-4a5e-b6d3-97f218a14b2d
	I1107 23:54:31.768491 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:31.768497 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:31.768505 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:31.768511 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:31.768517 1520543 round_trippers.go:580]     Content-Length: 291
	I1107 23:54:31.769401 1520543 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"be9e3fe2-6b1e-44da-90ed-3147e5fd8faf","resourceVersion":"348","creationTimestamp":"2023-11-07T23:54:18Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1107 23:54:31.769803 1520543 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"be9e3fe2-6b1e-44da-90ed-3147e5fd8faf","resourceVersion":"348","creationTimestamp":"2023-11-07T23:54:18Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1107 23:54:31.769859 1520543 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:54:31.769865 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:31.769874 1520543 round_trippers.go:473]     Content-Type: application/json
	I1107 23:54:31.769881 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:31.769887 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:31.781423 1520543 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1107 23:54:31.781445 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:31.781470 1520543 round_trippers.go:580]     Audit-Id: 21f9d360-1a88-46ef-90b3-7392e066ba49
	I1107 23:54:31.781478 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:31.781484 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:31.781490 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:31.781496 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:31.781502 1520543 round_trippers.go:580]     Content-Length: 291
	I1107 23:54:31.781509 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:31 GMT
	I1107 23:54:31.781700 1520543 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"be9e3fe2-6b1e-44da-90ed-3147e5fd8faf","resourceVersion":"349","creationTimestamp":"2023-11-07T23:54:18Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1107 23:54:31.781845 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:54:31.781854 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:31.781862 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:31.781869 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:31.783670 1520543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:54:31.783931 1520543 kapi.go:59] client config for multinode-898977: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.key", CAFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:54:31.784182 1520543 addons.go:231] Setting addon default-storageclass=true in "multinode-898977"
	I1107 23:54:31.784213 1520543 host.go:66] Checking if "multinode-898977" exists ...
	I1107 23:54:31.784649 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Status}}
	I1107 23:54:31.792491 1520543 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1107 23:54:31.792512 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:31.792520 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:31.792527 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:31.792534 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:31.792540 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:31.792546 1520543 round_trippers.go:580]     Content-Length: 291
	I1107 23:54:31.792553 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:31 GMT
	I1107 23:54:31.792559 1520543 round_trippers.go:580]     Audit-Id: ee655281-0963-42bc-99c3-3e7b2bb6b07c
	I1107 23:54:31.792581 1520543 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"be9e3fe2-6b1e-44da-90ed-3147e5fd8faf","resourceVersion":"349","creationTimestamp":"2023-11-07T23:54:18Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1107 23:54:31.792672 1520543 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-898977" context rescaled to 1 replicas
	I1107 23:54:31.792696 1520543 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:54:31.794508 1520543 out.go:177] * Verifying Kubernetes components...
	I1107 23:54:31.796476 1520543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:54:31.810024 1520543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:54:31.812229 1520543 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:54:31.812251 1520543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:54:31.812319 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:54:31.822753 1520543 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:54:31.822782 1520543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:54:31.822846 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:54:31.861056 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:54:31.884739 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:54:31.998010 1520543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:54:32.002349 1520543 command_runner.go:130] > apiVersion: v1
	I1107 23:54:32.002375 1520543 command_runner.go:130] > data:
	I1107 23:54:32.002380 1520543 command_runner.go:130] >   Corefile: |
	I1107 23:54:32.002385 1520543 command_runner.go:130] >     .:53 {
	I1107 23:54:32.002390 1520543 command_runner.go:130] >         errors
	I1107 23:54:32.002396 1520543 command_runner.go:130] >         health {
	I1107 23:54:32.002402 1520543 command_runner.go:130] >            lameduck 5s
	I1107 23:54:32.002406 1520543 command_runner.go:130] >         }
	I1107 23:54:32.002411 1520543 command_runner.go:130] >         ready
	I1107 23:54:32.002419 1520543 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1107 23:54:32.002430 1520543 command_runner.go:130] >            pods insecure
	I1107 23:54:32.002438 1520543 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1107 23:54:32.002443 1520543 command_runner.go:130] >            ttl 30
	I1107 23:54:32.002450 1520543 command_runner.go:130] >         }
	I1107 23:54:32.002456 1520543 command_runner.go:130] >         prometheus :9153
	I1107 23:54:32.002468 1520543 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1107 23:54:32.002474 1520543 command_runner.go:130] >            max_concurrent 1000
	I1107 23:54:32.002482 1520543 command_runner.go:130] >         }
	I1107 23:54:32.002488 1520543 command_runner.go:130] >         cache 30
	I1107 23:54:32.002493 1520543 command_runner.go:130] >         loop
	I1107 23:54:32.002497 1520543 command_runner.go:130] >         reload
	I1107 23:54:32.002502 1520543 command_runner.go:130] >         loadbalance
	I1107 23:54:32.002507 1520543 command_runner.go:130] >     }
	I1107 23:54:32.002520 1520543 command_runner.go:130] > kind: ConfigMap
	I1107 23:54:32.002524 1520543 command_runner.go:130] > metadata:
	I1107 23:54:32.002533 1520543 command_runner.go:130] >   creationTimestamp: "2023-11-07T23:54:18Z"
	I1107 23:54:32.002540 1520543 command_runner.go:130] >   name: coredns
	I1107 23:54:32.002546 1520543 command_runner.go:130] >   namespace: kube-system
	I1107 23:54:32.002553 1520543 command_runner.go:130] >   resourceVersion: "229"
	I1107 23:54:32.002560 1520543 command_runner.go:130] >   uid: 5f1145e7-645d-40da-a4f9-12705cda824e
	I1107 23:54:32.006719 1520543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:54:32.007003 1520543 kapi.go:59] client config for multinode-898977: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.key", CAFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:54:32.007271 1520543 node_ready.go:35] waiting up to 6m0s for node "multinode-898977" to be "Ready" ...
	I1107 23:54:32.007350 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:32.007361 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:32.007370 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:32.007377 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:32.007602 1520543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:54:32.035227 1520543 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1107 23:54:32.035259 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:32.035268 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:32.035274 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:32.035281 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:32.035288 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:32.035296 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:32 GMT
	I1107 23:54:32.035302 1520543 round_trippers.go:580]     Audit-Id: c7babcce-0d5e-4cbb-b79f-23da03b0e52f
	I1107 23:54:32.035488 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:32.036222 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:32.036241 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:32.036251 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:32.036259 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:32.040139 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:54:32.040161 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:32.040170 1520543 round_trippers.go:580]     Audit-Id: c0e70e9d-fe19-4aef-a6c1-8fddeb283f78
	I1107 23:54:32.040176 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:32.040182 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:32.040188 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:32.040195 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:32.040211 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:32 GMT
	I1107 23:54:32.052944 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:32.063173 1520543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:54:32.553657 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:32.553728 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:32.553751 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:32.553773 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:32.580579 1520543 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I1107 23:54:32.580654 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:32.580676 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:32.580699 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:32.580734 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:32.580768 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:32 GMT
	I1107 23:54:32.580790 1520543 round_trippers.go:580]     Audit-Id: dd60c186-d9bc-4409-a6ac-01d3c7f0209a
	I1107 23:54:32.580822 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:32.582219 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:32.827033 1520543 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1107 23:54:32.839567 1520543 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1107 23:54:32.880936 1520543 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1107 23:54:32.904747 1520543 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1107 23:54:32.914790 1520543 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1107 23:54:32.932878 1520543 command_runner.go:130] > pod/storage-provisioner created
	I1107 23:54:32.937810 1520543 command_runner.go:130] > configmap/coredns replaced
	I1107 23:54:32.937910 1520543 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1107 23:54:32.937955 1520543 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1107 23:54:32.938181 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1107 23:54:32.938189 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:32.938197 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:32.938204 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:32.944882 1520543 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 23:54:32.944956 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:32.945001 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:32 GMT
	I1107 23:54:32.945024 1520543 round_trippers.go:580]     Audit-Id: 76c1b7d4-1306-4a4f-835f-9aeb94b19a90
	I1107 23:54:32.945047 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:32.945080 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:32.945104 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:32.945127 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:32.945163 1520543 round_trippers.go:580]     Content-Length: 1273
	I1107 23:54:32.945282 1520543 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"373"},"items":[{"metadata":{"name":"standard","uid":"fe9e0dc4-7650-4cd3-8c16-a1d6e361f0a6","resourceVersion":"367","creationTimestamp":"2023-11-07T23:54:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:54:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1107 23:54:32.945809 1520543 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fe9e0dc4-7650-4cd3-8c16-a1d6e361f0a6","resourceVersion":"367","creationTimestamp":"2023-11-07T23:54:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:54:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 23:54:32.945909 1520543 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1107 23:54:32.945934 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:32.945966 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:32.946082 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:32.946107 1520543 round_trippers.go:473]     Content-Type: application/json
	I1107 23:54:32.953173 1520543 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1107 23:54:32.953244 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:32.953267 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:32.953288 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:32.953323 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:32.953347 1520543 round_trippers.go:580]     Content-Length: 1220
	I1107 23:54:32.953369 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:32 GMT
	I1107 23:54:32.953405 1520543 round_trippers.go:580]     Audit-Id: 6ac53766-2cff-4837-9901-54458b63f20a
	I1107 23:54:32.953428 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:32.953556 1520543 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fe9e0dc4-7650-4cd3-8c16-a1d6e361f0a6","resourceVersion":"367","creationTimestamp":"2023-11-07T23:54:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:54:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 23:54:32.958822 1520543 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 23:54:32.961711 1520543 addons.go:502] enable addons completed in 1.221270762s: enabled=[storage-provisioner default-storageclass]
	I1107 23:54:33.054384 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:33.054415 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:33.054436 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:33.054444 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:33.057399 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:33.057426 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:33.057436 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:33.057443 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:33.057449 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:33 GMT
	I1107 23:54:33.057463 1520543 round_trippers.go:580]     Audit-Id: 1de96b71-ab47-4877-8040-fe08ff2f31fb
	I1107 23:54:33.057469 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:33.057482 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:33.058010 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:33.553624 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:33.553651 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:33.553661 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:33.553668 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:33.556317 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:33.556348 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:33.556358 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:33.556365 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:33.556371 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:33.556378 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:33 GMT
	I1107 23:54:33.556384 1520543 round_trippers.go:580]     Audit-Id: eab65b80-f256-4a7c-9003-818b132ea506
	I1107 23:54:33.556391 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:33.556534 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:34.054380 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:34.054408 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:34.054418 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:34.054426 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:34.057102 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:34.057125 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:34.057134 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:34.057140 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:34 GMT
	I1107 23:54:34.057146 1520543 round_trippers.go:580]     Audit-Id: 0a1465bf-5a93-4efc-b462-b000a8dd5ddf
	I1107 23:54:34.057152 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:34.057158 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:34.057164 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:34.057737 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:34.058198 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:34.554417 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:34.554464 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:34.554475 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:34.554482 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:34.557181 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:34.557207 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:34.557216 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:34.557222 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:34.557229 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:34.557235 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:34 GMT
	I1107 23:54:34.557241 1520543 round_trippers.go:580]     Audit-Id: 057950c6-6779-48de-b285-fa1d6965d129
	I1107 23:54:34.557248 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:34.557453 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:35.054646 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:35.054677 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:35.054688 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:35.054696 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:35.057427 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:35.057457 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:35.057466 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:35.057473 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:35.057480 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:35.057486 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:35.057525 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:35 GMT
	I1107 23:54:35.057547 1520543 round_trippers.go:580]     Audit-Id: 906b4218-daf7-4cf1-91c9-aca7d7f4800b
	I1107 23:54:35.057741 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:35.553669 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:35.553696 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:35.553706 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:35.553714 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:35.556233 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:35.556259 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:35.556268 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:35 GMT
	I1107 23:54:35.556278 1520543 round_trippers.go:580]     Audit-Id: d3046092-87e4-46fa-801d-68d7af1d3b06
	I1107 23:54:35.556285 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:35.556291 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:35.556301 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:35.556310 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:35.556519 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:36.053627 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:36.053652 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:36.053663 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:36.053671 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:36.056721 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:54:36.056752 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:36.056761 1520543 round_trippers.go:580]     Audit-Id: 353a4728-f2ae-4e2f-bf05-908c6ec9a1b1
	I1107 23:54:36.056767 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:36.056774 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:36.056779 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:36.056786 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:36.056793 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:36 GMT
	I1107 23:54:36.056974 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:36.554053 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:36.554080 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:36.554089 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:36.554096 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:36.556862 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:36.556896 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:36.556905 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:36.556912 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:36 GMT
	I1107 23:54:36.556919 1520543 round_trippers.go:580]     Audit-Id: f55b58ec-5d9d-441a-82d5-50f32f0b0106
	I1107 23:54:36.556929 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:36.556936 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:36.556942 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:36.557050 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:36.557460 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:37.053621 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:37.053649 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:37.053659 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:37.053666 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:37.056284 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:37.056309 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:37.056317 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:37.056324 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:37.056331 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:37 GMT
	I1107 23:54:37.056337 1520543 round_trippers.go:580]     Audit-Id: bd3a9b33-cdb4-45ff-a400-eb36e4d2930e
	I1107 23:54:37.056343 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:37.056348 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:37.056512 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:37.553698 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:37.553724 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:37.553734 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:37.553741 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:37.556249 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:37.556271 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:37.556279 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:37.556285 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:37.556292 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:37.556298 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:37 GMT
	I1107 23:54:37.556304 1520543 round_trippers.go:580]     Audit-Id: d0627971-b203-4594-a5b2-75c34ee37198
	I1107 23:54:37.556310 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:37.556429 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:38.055581 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:38.055621 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:38.055642 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:38.055650 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:38.058573 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:38.058599 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:38.058608 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:38.058616 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:38.058622 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:38.058628 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:38 GMT
	I1107 23:54:38.058634 1520543 round_trippers.go:580]     Audit-Id: 74593355-98cb-436d-9a56-9b997a8a0337
	I1107 23:54:38.058641 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:38.058802 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:38.554099 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:38.554131 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:38.554142 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:38.554149 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:38.556598 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:38.556625 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:38.556635 1520543 round_trippers.go:580]     Audit-Id: 87ccdb60-c6f9-4aab-8c39-6070352d0d42
	I1107 23:54:38.556642 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:38.556648 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:38.556654 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:38.556660 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:38.556671 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:38 GMT
	I1107 23:54:38.557021 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:39.053713 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:39.053739 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:39.053749 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:39.053756 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:39.056532 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:39.056561 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:39.056571 1520543 round_trippers.go:580]     Audit-Id: 55cc3181-c052-4fef-a6d9-9a5bbb8e40e9
	I1107 23:54:39.056578 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:39.056584 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:39.056590 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:39.056597 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:39.056603 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:39 GMT
	I1107 23:54:39.056834 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:39.057321 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
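The repeated GET blocks above are minikube's node-readiness poll: roughly every 500 ms it fetches the Node object for multinode-898977 from the apiserver at 192.168.58.2:8443 and checks whether its Ready condition has turned True. A minimal client-go sketch of that loop is shown below; it is illustrative only, not minikube's actual node_ready implementation, and the kubeconfig path, poll interval, and timeout are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the Node object until its Ready condition is True,
	// mirroring the GET-every-500ms pattern visible in the log above.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				// Treat transient API errors as "not ready yet" and keep polling.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		// Assumption: credentials come from the default kubeconfig (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "multinode-898977", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}

In the log above the condition is still "False", so the poll simply continues, which is why the same request/response pair repeats below.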
	I1107 23:54:39.553684 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:39.553710 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:39.553720 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:39.553728 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:39.556595 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:39.556628 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:39.556640 1520543 round_trippers.go:580]     Audit-Id: 2fab6f53-bf42-4084-9b1b-5d50c7d4782b
	I1107 23:54:39.556651 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:39.556668 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:39.556674 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:39.556681 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:39.556688 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:39 GMT
	I1107 23:54:39.556844 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:40.054596 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:40.054636 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:40.054649 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:40.054656 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:40.057533 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:40.057557 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:40.057566 1520543 round_trippers.go:580]     Audit-Id: f3178618-36c8-42ed-bac8-b73b0575e2a7
	I1107 23:54:40.057573 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:40.057579 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:40.057585 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:40.057592 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:40.057598 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:40 GMT
	I1107 23:54:40.057781 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:40.553693 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:40.553715 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:40.553726 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:40.553735 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:40.556355 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:40.556381 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:40.556390 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:40.556397 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:40.556404 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:40.556412 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:40 GMT
	I1107 23:54:40.556418 1520543 round_trippers.go:580]     Audit-Id: eb518247-c06b-47bf-9e6c-d575bf07d28b
	I1107 23:54:40.556424 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:40.556595 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:41.053726 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:41.053752 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:41.053762 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:41.053769 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:41.056291 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:41.056314 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:41.056323 1520543 round_trippers.go:580]     Audit-Id: b1a56e40-def4-4c78-9c10-3e4df3e4f0dc
	I1107 23:54:41.056329 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:41.056335 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:41.056341 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:41.056348 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:41.056354 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:41 GMT
	I1107 23:54:41.056566 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:41.553685 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:41.553709 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:41.553719 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:41.553726 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:41.556477 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:41.556502 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:41.556510 1520543 round_trippers.go:580]     Audit-Id: a451a017-4987-42eb-a6a7-9b8a6ff8018a
	I1107 23:54:41.556517 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:41.556523 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:41.556529 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:41.556536 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:41.556542 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:41 GMT
	I1107 23:54:41.556743 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:41.557159 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:42.054054 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:42.054076 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:42.054086 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:42.054094 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:42.056796 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:42.056822 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:42.056831 1520543 round_trippers.go:580]     Audit-Id: c2ccc99e-9885-499e-b1ea-d4442f7db5c6
	I1107 23:54:42.056841 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:42.056848 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:42.056855 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:42.056861 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:42.056868 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:42 GMT
	I1107 23:54:42.057202 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:42.554418 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:42.554443 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:42.554453 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:42.554462 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:42.557064 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:42.557087 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:42.557095 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:42.557102 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:42.557108 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:42.557116 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:42.557134 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:42 GMT
	I1107 23:54:42.557140 1520543 round_trippers.go:580]     Audit-Id: 47cfb149-02fc-40a4-9524-0c30420ee789
	I1107 23:54:42.557542 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:43.054150 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:43.054177 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:43.054192 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:43.054199 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:43.056799 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:43.056823 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:43.056832 1520543 round_trippers.go:580]     Audit-Id: 6fda53d5-d4d2-4878-bc17-3c71f8e62a99
	I1107 23:54:43.056839 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:43.056845 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:43.056851 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:43.056857 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:43.056864 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:43 GMT
	I1107 23:54:43.057302 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:43.554481 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:43.554508 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:43.554518 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:43.554526 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:43.557618 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:54:43.557647 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:43.557655 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:43.557662 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:43.557668 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:43.557677 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:43.557684 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:43 GMT
	I1107 23:54:43.557691 1520543 round_trippers.go:580]     Audit-Id: 785c5a8b-1f4f-4527-9bf6-2f945d5be313
	I1107 23:54:43.557843 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:43.558377 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:44.054070 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:44.054096 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:44.054106 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:44.054113 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:44.056621 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:44.056647 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:44.056655 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:44 GMT
	I1107 23:54:44.056662 1520543 round_trippers.go:580]     Audit-Id: f667dcdc-5be6-4771-b93c-4013ceb60364
	I1107 23:54:44.056668 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:44.056674 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:44.056680 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:44.056686 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:44.056900 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:44.554042 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:44.554065 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:44.554079 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:44.554086 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:44.556852 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:44.556881 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:44.556889 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:44.556897 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:44 GMT
	I1107 23:54:44.556903 1520543 round_trippers.go:580]     Audit-Id: b9708907-ee6f-4670-a69c-60a52ca0a8e3
	I1107 23:54:44.556910 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:44.556918 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:44.556925 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:44.557047 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:45.054333 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:45.054363 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:45.054375 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:45.054393 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:45.058084 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:54:45.058125 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:45.058136 1520543 round_trippers.go:580]     Audit-Id: b95bac0f-0c0d-479d-bfb0-09dc989697df
	I1107 23:54:45.058143 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:45.058150 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:45.058157 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:45.058166 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:45.058174 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:45 GMT
	I1107 23:54:45.058342 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:45.553619 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:45.553647 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:45.553658 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:45.553665 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:45.556124 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:45.556148 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:45.556157 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:45.556164 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:45.556176 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:45.556188 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:45.556197 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:45 GMT
	I1107 23:54:45.556204 1520543 round_trippers.go:580]     Audit-Id: 290add97-766f-41c6-9fbb-4c83097aa21a
	I1107 23:54:45.556550 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:46.053725 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:46.053753 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:46.053763 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:46.053771 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:46.056712 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:46.056737 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:46.056763 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:46.056771 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:46.056778 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:46.056784 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:46.056794 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:46 GMT
	I1107 23:54:46.056802 1520543 round_trippers.go:580]     Audit-Id: 237d6630-5d57-442d-bbbb-61983a2ff448
	I1107 23:54:46.057324 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:46.057723 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:46.553878 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:46.553917 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:46.553927 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:46.553933 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:46.556591 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:46.556611 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:46.556619 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:46 GMT
	I1107 23:54:46.556626 1520543 round_trippers.go:580]     Audit-Id: d1a15c2b-50d7-4127-b8da-c47651b1e2a2
	I1107 23:54:46.556632 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:46.556638 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:46.556647 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:46.556654 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:46.556831 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:47.053916 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:47.053944 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:47.053955 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:47.053965 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:47.056733 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:47.056818 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:47.056836 1520543 round_trippers.go:580]     Audit-Id: 978514d0-2e06-4235-851a-a8464cb2532a
	I1107 23:54:47.056844 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:47.056851 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:47.056857 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:47.056863 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:47.056887 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:47 GMT
	I1107 23:54:47.057022 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:47.554527 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:47.554553 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:47.554562 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:47.554569 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:47.557058 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:47.557080 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:47.557088 1520543 round_trippers.go:580]     Audit-Id: 74953d58-3dc4-40e8-9f06-ebc559fa962c
	I1107 23:54:47.557095 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:47.557102 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:47.557108 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:47.557114 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:47.557121 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:47 GMT
	I1107 23:54:47.557233 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:48.054558 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:48.054595 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:48.054611 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:48.054620 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:48.057578 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:48.057607 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:48.057617 1520543 round_trippers.go:580]     Audit-Id: be59e095-5ba0-44d2-84bb-429f8364158c
	I1107 23:54:48.057623 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:48.057630 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:48.057636 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:48.057642 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:48.057649 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:48 GMT
	I1107 23:54:48.057806 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:48.058284 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:48.553934 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:48.553959 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:48.553969 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:48.553991 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:48.556404 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:48.556430 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:48.556438 1520543 round_trippers.go:580]     Audit-Id: df36b070-906e-4c07-b98d-75ebabe4ab5e
	I1107 23:54:48.556444 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:48.556451 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:48.556457 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:48.556463 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:48.556469 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:48 GMT
	I1107 23:54:48.556586 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:49.053701 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:49.053727 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:49.053737 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:49.053744 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:49.056385 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:49.056414 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:49.056423 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:49.056429 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:49.056463 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:49 GMT
	I1107 23:54:49.056475 1520543 round_trippers.go:580]     Audit-Id: e043f55d-66fb-4521-9dd0-d6bbf52b696c
	I1107 23:54:49.056481 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:49.056488 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:49.056847 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:49.554535 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:49.554563 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:49.554574 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:49.554582 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:49.557288 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:49.557322 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:49.557332 1520543 round_trippers.go:580]     Audit-Id: f8d968bb-e825-4945-aa7a-ba03aa292274
	I1107 23:54:49.557338 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:49.557344 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:49.557351 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:49.557357 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:49.557364 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:49 GMT
	I1107 23:54:49.557492 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:50.053918 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:50.053996 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:50.054008 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:50.054022 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:50.056666 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:50.056692 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:50.056702 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:50.056708 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:50.056714 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:50.056722 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:50.056729 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:50 GMT
	I1107 23:54:50.056736 1520543 round_trippers.go:580]     Audit-Id: 9969de85-a78a-4084-946c-f0d9f92b765a
	I1107 23:54:50.057108 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:50.553723 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:50.553750 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:50.553760 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:50.553767 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:50.556343 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:50.556446 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:50.556471 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:50.556504 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:50.556531 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:50 GMT
	I1107 23:54:50.556543 1520543 round_trippers.go:580]     Audit-Id: 6aa6c8e5-690f-4cec-aec9-c0bf083eb977
	I1107 23:54:50.556550 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:50.556556 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:50.556662 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:50.557086 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:51.054095 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:51.054121 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:51.054131 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:51.054139 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:51.056849 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:51.056874 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:51.056883 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:51 GMT
	I1107 23:54:51.056890 1520543 round_trippers.go:580]     Audit-Id: d3bb63c2-ef41-4707-b115-f0100e0432c6
	I1107 23:54:51.056896 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:51.056903 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:51.056909 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:51.056919 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:51.057067 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:51.554438 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:51.554462 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:51.554472 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:51.554485 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:51.557150 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:51.557175 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:51.557186 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:51.557193 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:51.557200 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:51 GMT
	I1107 23:54:51.557207 1520543 round_trippers.go:580]     Audit-Id: feb39fdc-93e4-4352-9f88-b818de1def79
	I1107 23:54:51.557213 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:51.557222 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:51.557479 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:52.054266 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:52.054292 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:52.054313 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:52.054323 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:52.056916 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:52.056946 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:52.056956 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:52 GMT
	I1107 23:54:52.056963 1520543 round_trippers.go:580]     Audit-Id: c18c1f56-56a1-4664-958f-9bb26081315c
	I1107 23:54:52.056969 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:52.056976 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:52.056982 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:52.056988 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:52.057250 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:52.554410 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:52.554435 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:52.554446 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:52.554453 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:52.557017 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:52.557048 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:52.557057 1520543 round_trippers.go:580]     Audit-Id: 62420054-4b3c-4f70-9726-527ae458be14
	I1107 23:54:52.557063 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:52.557070 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:52.557076 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:52.557083 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:52.557090 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:52 GMT
	I1107 23:54:52.557218 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:52.557614 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:53.054378 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:53.054402 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:53.054412 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:53.054419 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:53.056958 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:53.056979 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:53.056988 1520543 round_trippers.go:580]     Audit-Id: ea650793-1f57-4d28-8be1-013b5bfbd949
	I1107 23:54:53.056994 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:53.057001 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:53.057009 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:53.057015 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:53.057023 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:53 GMT
	I1107 23:54:53.057162 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:53.553657 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:53.553687 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:53.553697 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:53.553704 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:53.556247 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:53.556272 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:53.556281 1520543 round_trippers.go:580]     Audit-Id: 9a40d105-ce6e-4b5d-bcab-bd797013bfca
	I1107 23:54:53.556287 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:53.556294 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:53.556300 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:53.556306 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:53.556313 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:53 GMT
	I1107 23:54:53.556630 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:54.053685 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:54.053713 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:54.053724 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:54.053732 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:54.056318 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:54.056354 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:54.056363 1520543 round_trippers.go:580]     Audit-Id: f6491416-ec53-4e93-ad74-8c325c485ce2
	I1107 23:54:54.056370 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:54.056376 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:54.056383 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:54.056389 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:54.056396 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:54 GMT
	I1107 23:54:54.056575 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:54.553644 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:54.553669 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:54.553680 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:54.553687 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:54.556452 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:54.556474 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:54.556483 1520543 round_trippers.go:580]     Audit-Id: c29146bc-0e85-42f9-b7e7-fbdb2379a8f7
	I1107 23:54:54.556489 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:54.556495 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:54.556502 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:54.556510 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:54.556516 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:54 GMT
	I1107 23:54:54.556701 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:55.053700 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:55.053727 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:55.053737 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:55.053746 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:55.056338 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:55.056361 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:55.056370 1520543 round_trippers.go:580]     Audit-Id: 2e8003b4-7df5-4524-aa09-d1102b0e7329
	I1107 23:54:55.056377 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:55.056383 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:55.056389 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:55.056395 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:55.056402 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:55 GMT
	I1107 23:54:55.056569 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:55.057028 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:55.553708 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:55.553736 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:55.553746 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:55.553754 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:55.556373 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:55.556395 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:55.556413 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:55 GMT
	I1107 23:54:55.556420 1520543 round_trippers.go:580]     Audit-Id: fa8b643e-6ec2-4b6d-9750-76d085fd05cb
	I1107 23:54:55.556426 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:55.556432 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:55.556438 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:55.556444 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:55.556582 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:56.054372 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:56.054398 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:56.054408 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:56.054415 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:56.057252 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:56.057281 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:56.057289 1520543 round_trippers.go:580]     Audit-Id: 0fb5f3ce-cc86-40db-b517-5190ab21acee
	I1107 23:54:56.057296 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:56.057303 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:56.057309 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:56.057316 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:56.057323 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:56 GMT
	I1107 23:54:56.057489 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:56.553943 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:56.553972 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:56.554002 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:56.554009 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:56.556424 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:56.556450 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:56.556459 1520543 round_trippers.go:580]     Audit-Id: ed08ba1d-a74f-413f-a930-d6b0289155fb
	I1107 23:54:56.556466 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:56.556472 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:56.556478 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:56.556484 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:56.556491 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:56 GMT
	I1107 23:54:56.556611 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:57.053702 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:57.053726 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:57.053736 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:57.053744 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:57.056434 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:57.056459 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:57.056467 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:57.056474 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:57 GMT
	I1107 23:54:57.056481 1520543 round_trippers.go:580]     Audit-Id: 6a060776-ba4b-4750-a175-9f628f561fe5
	I1107 23:54:57.056487 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:57.056494 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:57.056500 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:57.056656 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:57.057105 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:57.553820 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:57.553846 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:57.553856 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:57.553865 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:57.556335 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:57.556361 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:57.556370 1520543 round_trippers.go:580]     Audit-Id: 3cdc11c5-f5d0-4b56-b933-309423c3cb85
	I1107 23:54:57.556377 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:57.556383 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:57.556389 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:57.556395 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:57.556403 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:57 GMT
	I1107 23:54:57.556520 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:58.053620 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:58.053647 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:58.053657 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:58.053664 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:58.056295 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:58.056318 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:58.056326 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:58.056333 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:58.056339 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:58 GMT
	I1107 23:54:58.056345 1520543 round_trippers.go:580]     Audit-Id: 823b52d1-d3d2-4c5c-a05a-be15091780df
	I1107 23:54:58.056352 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:58.056358 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:58.056487 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:58.554612 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:58.554639 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:58.554649 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:58.554657 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:58.557364 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:58.557392 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:58.557401 1520543 round_trippers.go:580]     Audit-Id: bb7d5465-f7cc-49ee-abfe-3f112e1298a0
	I1107 23:54:58.557407 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:58.557413 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:58.557419 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:58.557426 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:58.557432 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:58 GMT
	I1107 23:54:58.557518 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:59.054441 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:59.054501 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:59.054511 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:59.054518 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:59.056952 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:59.056978 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:59.056986 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:59.056993 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:59.056999 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:59.057005 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:59.057011 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:59 GMT
	I1107 23:54:59.057017 1520543 round_trippers.go:580]     Audit-Id: f23f1ca7-f014-4a26-83fa-a00fb56fc48f
	I1107 23:54:59.057148 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:54:59.057533 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:54:59.554338 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:54:59.554369 1520543 round_trippers.go:469] Request Headers:
	I1107 23:54:59.554379 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:54:59.554387 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:54:59.556987 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:54:59.557016 1520543 round_trippers.go:577] Response Headers:
	I1107 23:54:59.557024 1520543 round_trippers.go:580]     Audit-Id: 3b6f9047-146d-4b7d-8ddc-4162b47c2a9c
	I1107 23:54:59.557031 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:54:59.557038 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:54:59.557044 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:54:59.557051 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:54:59.557057 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:54:59 GMT
	I1107 23:54:59.557211 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:55:00.054416 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:00.054444 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:00.054456 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:00.054463 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:00.057533 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:00.057561 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:00.057572 1520543 round_trippers.go:580]     Audit-Id: 82ddab40-8c2f-4a09-a6f7-eb07ba6e48bc
	I1107 23:55:00.057579 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:00.057585 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:00.057599 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:00.057606 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:00.057615 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:00 GMT
	I1107 23:55:00.058213 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:55:00.554484 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:00.554508 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:00.554518 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:00.554530 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:00.557404 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:00.557435 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:00.557444 1520543 round_trippers.go:580]     Audit-Id: a8e3c104-f5a6-4f72-b93d-d8f03f2f8fe1
	I1107 23:55:00.557451 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:00.557457 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:00.557463 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:00.557469 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:00.557476 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:00 GMT
	I1107 23:55:00.557590 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:55:01.053833 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:01.053867 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:01.053878 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:01.053888 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:01.056614 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:01.056641 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:01.056650 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:01.056656 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:01.056663 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:01 GMT
	I1107 23:55:01.056669 1520543 round_trippers.go:580]     Audit-Id: 094d5262-eb06-49ab-a13d-3961a0487a76
	I1107 23:55:01.056675 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:01.056682 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:01.056823 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:55:01.554168 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:01.554202 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:01.554222 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:01.554231 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:01.556842 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:01.556867 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:01.556876 1520543 round_trippers.go:580]     Audit-Id: 1d28200b-a088-4470-a886-106792d644b6
	I1107 23:55:01.556883 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:01.556890 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:01.556896 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:01.556902 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:01.556909 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:01 GMT
	I1107 23:55:01.557076 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:55:01.557480 1520543 node_ready.go:58] node "multinode-898977" has status "Ready":"False"
	I1107 23:55:02.054429 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:02.054453 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:02.054462 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:02.054469 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:02.057111 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:02.057147 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:02.057156 1520543 round_trippers.go:580]     Audit-Id: 283afe9c-95bf-4595-a6a0-e063875d15dc
	I1107 23:55:02.057163 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:02.057172 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:02.057178 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:02.057184 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:02.057190 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:02 GMT
	I1107 23:55:02.057305 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:55:02.554561 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:02.554595 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:02.554606 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:02.554614 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:02.557549 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:02.557574 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:02.557583 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:02.557590 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:02 GMT
	I1107 23:55:02.557597 1520543 round_trippers.go:580]     Audit-Id: d362c415-b9f1-4f52-963e-68b5762c0640
	I1107 23:55:02.557603 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:02.557609 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:02.557615 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:02.557713 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"291","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1107 23:55:03.054539 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:03.054569 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:03.054579 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:03.054587 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:03.060569 1520543 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:55:03.060602 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:03.060613 1520543 round_trippers.go:580]     Audit-Id: 64d255d3-ec10-4b0f-8a7e-5a36f10d9fca
	I1107 23:55:03.060621 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:03.060628 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:03.060635 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:03.060642 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:03.060656 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:03 GMT
	I1107 23:55:03.061019 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:03.061441 1520543 node_ready.go:49] node "multinode-898977" has status "Ready":"True"
	I1107 23:55:03.061464 1520543 node_ready.go:38] duration metric: took 31.054167564s waiting for node "multinode-898977" to be "Ready" ...
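The poll above repeats GET /api/v1/nodes/multinode-898977 until the Node object reports a "Ready" condition with status "True". A minimal sketch of that single check in Go, assuming (hypothetically) that the API is reachable without authentication through `kubectl proxy` on 127.0.0.1:8001; the test itself authenticates with the cluster's client certificates:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// node carries only the fields needed to read the Ready condition
// from a v1.Node object.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeIsReady fetches the named node through `kubectl proxy` (apiBase is
// assumed to be http://127.0.0.1:8001) and reports whether Ready=True.
func nodeIsReady(apiBase, name string) (bool, error) {
	resp, err := http.Get(apiBase + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeIsReady("http://127.0.0.1:8001", "multinode-898977")
	fmt.Println(ready, err)
}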
	I1107 23:55:03.061475 1520543 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:55:03.061551 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:55:03.061561 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:03.061570 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:03.061578 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:03.066885 1520543 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:55:03.066920 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:03.066929 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:03.066937 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:03.066943 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:03.066950 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:03.066956 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:03 GMT
	I1107 23:55:03.066963 1520543 round_trippers.go:580]     Audit-Id: 6ee7acbe-f953-454a-b61f-c77d5ebb8db6
	I1107 23:55:03.067431 1520543 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"397","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52580 chars]
	I1107 23:55:03.071645 1520543 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5822m" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:03.071814 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5822m
	I1107 23:55:03.071840 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:03.071862 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:03.071869 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:03.078636 1520543 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 23:55:03.078663 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:03.078672 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:03.078679 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:03.078685 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:03.078692 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:03.078698 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:03 GMT
	I1107 23:55:03.078705 1520543 round_trippers.go:580]     Audit-Id: 03181492-e53d-4f81-b07b-9a38981e862c
	I1107 23:55:03.079254 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"401","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1107 23:55:03.079853 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:03.079880 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:03.079891 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:03.079898 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:03.086604 1520543 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 23:55:03.086627 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:03.086635 1520543 round_trippers.go:580]     Audit-Id: 527816ed-a2ab-450c-add5-1a087dd465f1
	I1107 23:55:03.086642 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:03.086648 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:03.086655 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:03.086662 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:03.086668 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:03 GMT
	I1107 23:55:03.086910 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:03.087361 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5822m
	I1107 23:55:03.087378 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:03.087387 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:03.087394 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:03.091200 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:03.091229 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:03.091239 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:03.091246 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:03 GMT
	I1107 23:55:03.091253 1520543 round_trippers.go:580]     Audit-Id: 792291d5-1f84-478c-87b6-cb195234c1e2
	I1107 23:55:03.091259 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:03.091265 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:03.091271 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:03.092359 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"401","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1107 23:55:03.093006 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:03.093020 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:03.093030 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:03.093036 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:03.095604 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:03.095629 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:03.095646 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:03.095653 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:03.095659 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:03.095666 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:03.095672 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:03 GMT
	I1107 23:55:03.095678 1520543 round_trippers.go:580]     Audit-Id: 2c778788-7ad4-488b-b246-e10c94a11675
	I1107 23:55:03.095851 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:03.596883 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5822m
	I1107 23:55:03.596906 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:03.596916 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:03.596923 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:03.599364 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:03.599388 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:03.599397 1520543 round_trippers.go:580]     Audit-Id: eff7185e-1fe3-4719-95ac-12806201bb6d
	I1107 23:55:03.599404 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:03.599410 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:03.599416 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:03.599423 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:03.599430 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:03 GMT
	I1107 23:55:03.599808 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"401","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1107 23:55:03.600347 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:03.600356 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:03.600365 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:03.600371 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:03.602747 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:03.602765 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:03.602774 1520543 round_trippers.go:580]     Audit-Id: b1892626-748a-4c8d-8169-27b2d6f06b72
	I1107 23:55:03.602781 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:03.602787 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:03.602814 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:03.602826 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:03.602832 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:03 GMT
	I1107 23:55:03.603151 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:04.096755 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5822m
	I1107 23:55:04.096783 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.096793 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.096800 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.099566 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.099589 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.099597 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.099604 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.099611 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.099617 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.099624 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.099630 1520543 round_trippers.go:580]     Audit-Id: 51fb33a5-d485-4a85-b0b4-62d8a868a97d
	I1107 23:55:04.100049 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"401","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1107 23:55:04.100631 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:04.100652 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.100662 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.100669 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.103330 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.103385 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.103405 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.103428 1520543 round_trippers.go:580]     Audit-Id: a374554e-4f7d-467f-bd55-72c2c3f78f5a
	I1107 23:55:04.103462 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.103486 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.103506 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.103527 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.103684 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:04.597171 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5822m
	I1107 23:55:04.597196 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.597207 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.597215 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.599758 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.599825 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.599848 1520543 round_trippers.go:580]     Audit-Id: 97b43d52-f964-4731-a333-a04fa8ba1b1e
	I1107 23:55:04.599871 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.599909 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.599917 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.599933 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.599940 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.600071 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"412","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1107 23:55:04.600625 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:04.600646 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.600655 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.600662 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.603082 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.603101 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.603110 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.603121 1520543 round_trippers.go:580]     Audit-Id: 551de36b-9f19-4284-bec3-5850f56af662
	I1107 23:55:04.603128 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.603133 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.603139 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.603145 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.603344 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:04.603772 1520543 pod_ready.go:92] pod "coredns-5dd5756b68-5822m" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:04.603785 1520543 pod_ready.go:81] duration metric: took 1.53210404s waiting for pod "coredns-5dd5756b68-5822m" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:04.603794 1520543 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:04.603852 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-898977
	I1107 23:55:04.603857 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.603865 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.603871 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.606156 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.606178 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.606189 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.606196 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.606202 1520543 round_trippers.go:580]     Audit-Id: d71d4bb8-ab33-4945-8c7a-bfaaa0de810c
	I1107 23:55:04.606209 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.606218 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.606229 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.606948 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-898977","namespace":"kube-system","uid":"f044e6fe-c11b-4c4c-86b9-4128bb0094a1","resourceVersion":"384","creationTimestamp":"2023-11-07T23:54:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"237353557045024b06e23bafb1a554bc","kubernetes.io/config.mirror":"237353557045024b06e23bafb1a554bc","kubernetes.io/config.seen":"2023-11-07T23:54:18.228048197Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1107 23:55:04.607428 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:04.607444 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.607453 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.607460 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.609790 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.609836 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.609873 1520543 round_trippers.go:580]     Audit-Id: e0f1e4fa-9313-4c44-bef8-189737bce10d
	I1107 23:55:04.609904 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.609925 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.609963 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.610013 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.610033 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.610213 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:04.610646 1520543 pod_ready.go:92] pod "etcd-multinode-898977" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:04.610666 1520543 pod_ready.go:81] duration metric: took 6.864894ms waiting for pod "etcd-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:04.610680 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:04.610769 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-898977
	I1107 23:55:04.610776 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.610784 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.610790 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.613177 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.613197 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.613205 1520543 round_trippers.go:580]     Audit-Id: ed6b0d82-2cd3-44e2-afb9-ef2e8d5ac5c1
	I1107 23:55:04.613212 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.613217 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.613223 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.613236 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.613257 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.613429 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-898977","namespace":"kube-system","uid":"421e7824-45c9-4241-a678-ab9289aad2e2","resourceVersion":"385","creationTimestamp":"2023-11-07T23:54:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"79f694efd9903456acbe1877608e409c","kubernetes.io/config.mirror":"79f694efd9903456acbe1877608e409c","kubernetes.io/config.seen":"2023-11-07T23:54:18.228053793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1107 23:55:04.613961 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:04.614002 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.614012 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.614024 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.616381 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.616404 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.616413 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.616419 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.616425 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.616432 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.616438 1520543 round_trippers.go:580]     Audit-Id: 2227e034-63be-4b1a-911c-1d1aea5db4e7
	I1107 23:55:04.616449 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.616623 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:04.617015 1520543 pod_ready.go:92] pod "kube-apiserver-multinode-898977" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:04.617037 1520543 pod_ready.go:81] duration metric: took 6.330417ms waiting for pod "kube-apiserver-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:04.617049 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:04.617120 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-898977
	I1107 23:55:04.617130 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.617138 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.617145 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.619502 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.619523 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.619531 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.619538 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.619545 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.619551 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.619564 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.619574 1520543 round_trippers.go:580]     Audit-Id: e4f619eb-fb44-4f2c-8598-fc0759aa640b
	I1107 23:55:04.619778 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-898977","namespace":"kube-system","uid":"f99e3f68-3118-43cc-b04a-e031a0b53897","resourceVersion":"386","creationTimestamp":"2023-11-07T23:54:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7ecb0666ede4cd952e8c73745dd34a88","kubernetes.io/config.mirror":"7ecb0666ede4cd952e8c73745dd34a88","kubernetes.io/config.seen":"2023-11-07T23:54:18.228055163Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1107 23:55:04.655564 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:04.655592 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.655616 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.655625 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.658669 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:04.658722 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.658732 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.658744 1520543 round_trippers.go:580]     Audit-Id: f92cf242-b9b7-405b-b14d-55fb7e64d809
	I1107 23:55:04.658760 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.658767 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.658783 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.658795 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.658966 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:04.659493 1520543 pod_ready.go:92] pod "kube-controller-manager-multinode-898977" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:04.659518 1520543 pod_ready.go:81] duration metric: took 42.455208ms waiting for pod "kube-controller-manager-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:04.659532 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2v949" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:04.854995 1520543 request.go:629] Waited for 195.363643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2v949
	I1107 23:55:04.855108 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2v949
	I1107 23:55:04.855121 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:04.855130 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:04.855137 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:04.857785 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:04.857814 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:04.857823 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:04.857830 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:04.857836 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:04 GMT
	I1107 23:55:04.857842 1520543 round_trippers.go:580]     Audit-Id: bacb22a7-a556-4e36-962b-ea11fff74da6
	I1107 23:55:04.857865 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:04.857880 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:04.858097 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2v949","generateName":"kube-proxy-","namespace":"kube-system","uid":"2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8","resourceVersion":"377","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3f9ab921-6f58-4e7b-af20-e65af3cf1e74","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3f9ab921-6f58-4e7b-af20-e65af3cf1e74\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1107 23:55:05.054974 1520543 request.go:629] Waited for 196.365366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:05.055097 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:05.055109 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:05.055119 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:05.055126 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:05.057875 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:05.057900 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:05.057908 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:05 GMT
	I1107 23:55:05.057915 1520543 round_trippers.go:580]     Audit-Id: ac510714-bf99-42e2-829d-8affbf44e5ab
	I1107 23:55:05.057931 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:05.057938 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:05.057945 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:05.057951 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:05.058104 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:05.058523 1520543 pod_ready.go:92] pod "kube-proxy-2v949" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:05.058539 1520543 pod_ready.go:81] duration metric: took 398.997141ms waiting for pod "kube-proxy-2v949" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:05.058555 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:05.255017 1520543 request.go:629] Waited for 196.392148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-898977
	I1107 23:55:05.255120 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-898977
	I1107 23:55:05.255127 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:05.255155 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:05.255191 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:05.257953 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:05.258076 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:05.258092 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:05 GMT
	I1107 23:55:05.258100 1520543 round_trippers.go:580]     Audit-Id: b5c15861-275d-45b5-a79e-97653bf349b4
	I1107 23:55:05.258106 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:05.258112 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:05.258122 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:05.258138 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:05.258294 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-898977","namespace":"kube-system","uid":"7845b1ea-a5fd-4e03-8157-ae59da7d6651","resourceVersion":"383","creationTimestamp":"2023-11-07T23:54:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad918cb511f596afccb23f4947338cf7","kubernetes.io/config.mirror":"ad918cb511f596afccb23f4947338cf7","kubernetes.io/config.seen":"2023-11-07T23:54:18.228056254Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1107 23:55:05.455132 1520543 request.go:629] Waited for 196.351065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:05.455234 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:05.455271 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:05.455287 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:05.455295 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:05.457953 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:05.458135 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:05.458179 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:05 GMT
	I1107 23:55:05.458187 1520543 round_trippers.go:580]     Audit-Id: dc8175c7-ee56-4811-b5b3-2005a6681218
	I1107 23:55:05.458193 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:05.458211 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:05.458224 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:05.458230 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:05.458351 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:05.458748 1520543 pod_ready.go:92] pod "kube-scheduler-multinode-898977" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:05.458767 1520543 pod_ready.go:81] duration metric: took 400.202448ms waiting for pod "kube-scheduler-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:05.458780 1520543 pod_ready.go:38] duration metric: took 2.397287485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
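The "waiting up to 6m0s" entries above amount to polling each pod's status.conditions until Ready reports "True" or the deadline passes. A rough sketch of that loop, again assuming unauthenticated access via `kubectl proxy` (a hypothetical setup, not what the test itself uses):

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// podReady reports whether the pod's status.conditions contain Ready=True.
// apiBase is assumed to be a kubectl proxy endpoint such as http://127.0.0.1:8001.
func podReady(apiBase, namespace, name string) (bool, error) {
	resp, err := http.Get(apiBase + "/api/v1/namespaces/" + namespace + "/pods/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var pod struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&pod); err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

// waitForPod polls every 500ms (the log above shows a similar cadence)
// until the pod is Ready or the timeout expires.
func waitForPod(apiBase, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ok, err := podReady(apiBase, namespace, name); err == nil && ok {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for pod " + namespace + "/" + name)
}

func main() {
	fmt.Println(waitForPod("http://127.0.0.1:8001", "kube-system", "coredns-5dd5756b68-5822m", 6*time.Minute))
}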
	I1107 23:55:05.458797 1520543 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:55:05.458855 1520543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:55:05.471651 1520543 command_runner.go:130] > 1268
	I1107 23:55:05.473073 1520543 api_server.go:72] duration metric: took 33.680350652s to wait for apiserver process to appear ...
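The apiserver-process check above boils down to running pgrep with -x (exact match), -n (newest), and -f (match the full command line) and treating any PID on stdout as success. A local sketch using os/exec; the test runs the identical command over SSH inside the minikube node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverRunning runs pgrep locally and reports whether a kube-apiserver
// process with "minikube" on its command line exists.
func apiserverRunning() (bool, error) {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits non-zero when nothing matches; treat that as "not running".
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil
		}
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	ok, err := apiserverRunning()
	fmt.Println(ok, err)
}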
	I1107 23:55:05.473095 1520543 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:55:05.473111 1520543 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1107 23:55:05.482912 1520543 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
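The healthz probe above is a plain GET against https://192.168.58.2:8443/healthz that expects a 200 response with the literal body "ok". A stripped-down sketch; it disables certificate verification and sends no client credentials (a hardened cluster may reject that), whereas the test presents minikube's client certificate and CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiServerHealthy GETs /healthz and accepts only a 200 with body "ok".
// TLS verification is skipped purely for this sketch.
func apiServerHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiServerHealthy("https://192.168.58.2:8443")
	fmt.Println(ok, err)
}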
	I1107 23:55:05.483014 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1107 23:55:05.483040 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:05.483056 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:05.483063 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:05.484242 1520543 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:55:05.484264 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:05.484272 1520543 round_trippers.go:580]     Content-Length: 264
	I1107 23:55:05.484279 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:05 GMT
	I1107 23:55:05.484285 1520543 round_trippers.go:580]     Audit-Id: 9385f9e6-d83d-4651-b503-7646f02c126c
	I1107 23:55:05.484291 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:05.484299 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:05.484305 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:05.484315 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:05.484334 1520543 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1107 23:55:05.484436 1520543 api_server.go:141] control plane version: v1.28.3
	I1107 23:55:05.484456 1520543 api_server.go:131] duration metric: took 11.35558ms to wait for apiserver health ...
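	For reference, the same liveness and version probes can be reproduced by hand once a kubeconfig points at this cluster (a sketch; the kubectl context name is assumed to match the minikube profile):

	    kubectl --context multinode-898977 get --raw /healthz
	    kubectl --context multinode-898977 get --raw /version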
	I1107 23:55:05.484464 1520543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:55:05.654879 1520543 request.go:629] Waited for 170.332781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:55:05.654971 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:55:05.654984 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:05.654994 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:05.655002 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:05.658562 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:05.658623 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:05.658644 1520543 round_trippers.go:580]     Audit-Id: 86f3c74d-24bf-48c1-ae4e-b58a7251b967
	I1107 23:55:05.658670 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:05.658683 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:05.658689 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:05.658712 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:05.658723 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:05 GMT
	I1107 23:55:05.659704 1520543 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"412","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1107 23:55:05.662216 1520543 system_pods.go:59] 8 kube-system pods found
	I1107 23:55:05.662247 1520543 system_pods.go:61] "coredns-5dd5756b68-5822m" [0946267b-9eb0-42c0-8451-34a99c6055fa] Running
	I1107 23:55:05.662254 1520543 system_pods.go:61] "etcd-multinode-898977" [f044e6fe-c11b-4c4c-86b9-4128bb0094a1] Running
	I1107 23:55:05.662259 1520543 system_pods.go:61] "kindnet-6hghf" [12c0dff2-21a3-435f-aef2-d2201a778bc8] Running
	I1107 23:55:05.662264 1520543 system_pods.go:61] "kube-apiserver-multinode-898977" [421e7824-45c9-4241-a678-ab9289aad2e2] Running
	I1107 23:55:05.662270 1520543 system_pods.go:61] "kube-controller-manager-multinode-898977" [f99e3f68-3118-43cc-b04a-e031a0b53897] Running
	I1107 23:55:05.662274 1520543 system_pods.go:61] "kube-proxy-2v949" [2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8] Running
	I1107 23:55:05.662279 1520543 system_pods.go:61] "kube-scheduler-multinode-898977" [7845b1ea-a5fd-4e03-8157-ae59da7d6651] Running
	I1107 23:55:05.662284 1520543 system_pods.go:61] "storage-provisioner" [1e92762e-f03a-4e20-9228-9a7ee152c9d1] Running
	I1107 23:55:05.662293 1520543 system_pods.go:74] duration metric: took 177.824574ms to wait for pod list to return data ...
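	The same kube-system inventory can be confirmed from outside the test harness with kubectl (sketch, same context-name assumption as above):

	    kubectl --context multinode-898977 -n kube-system get pods -o wide
	    kubectl --context multinode-898977 -n kube-system wait --for=condition=Ready pod --all --timeout=120s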
	I1107 23:55:05.662301 1520543 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:55:05.854631 1520543 request.go:629] Waited for 192.2481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:55:05.854720 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:55:05.854748 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:05.854759 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:05.854772 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:05.857223 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:05.857245 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:05.857253 1520543 round_trippers.go:580]     Audit-Id: a480527d-28aa-49b1-aa78-1bff980e76e6
	I1107 23:55:05.857260 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:05.857266 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:05.857291 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:05.857305 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:05.857313 1520543 round_trippers.go:580]     Content-Length: 261
	I1107 23:55:05.857321 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:05 GMT
	I1107 23:55:05.857342 1520543 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"994ea8de-ed06-49da-b97d-b63adeae25b6","resourceVersion":"296","creationTimestamp":"2023-11-07T23:54:31Z"}}]}
	I1107 23:55:05.857611 1520543 default_sa.go:45] found service account: "default"
	I1107 23:55:05.857631 1520543 default_sa.go:55] duration metric: took 195.320788ms for default service account to be created ...
	I1107 23:55:05.857640 1520543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:55:06.055123 1520543 request.go:629] Waited for 197.37129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:55:06.055190 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:55:06.055196 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:06.055211 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:06.055219 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:06.058830 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:06.058855 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:06.058864 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:06.058871 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:06.058899 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:06 GMT
	I1107 23:55:06.058911 1520543 round_trippers.go:580]     Audit-Id: 935ba7b4-0c60-4935-ae3e-20df0ac0fd57
	I1107 23:55:06.058918 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:06.058924 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:06.059836 1520543 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"412","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1107 23:55:06.062325 1520543 system_pods.go:86] 8 kube-system pods found
	I1107 23:55:06.062357 1520543 system_pods.go:89] "coredns-5dd5756b68-5822m" [0946267b-9eb0-42c0-8451-34a99c6055fa] Running
	I1107 23:55:06.062367 1520543 system_pods.go:89] "etcd-multinode-898977" [f044e6fe-c11b-4c4c-86b9-4128bb0094a1] Running
	I1107 23:55:06.062373 1520543 system_pods.go:89] "kindnet-6hghf" [12c0dff2-21a3-435f-aef2-d2201a778bc8] Running
	I1107 23:55:06.062378 1520543 system_pods.go:89] "kube-apiserver-multinode-898977" [421e7824-45c9-4241-a678-ab9289aad2e2] Running
	I1107 23:55:06.062384 1520543 system_pods.go:89] "kube-controller-manager-multinode-898977" [f99e3f68-3118-43cc-b04a-e031a0b53897] Running
	I1107 23:55:06.062389 1520543 system_pods.go:89] "kube-proxy-2v949" [2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8] Running
	I1107 23:55:06.062394 1520543 system_pods.go:89] "kube-scheduler-multinode-898977" [7845b1ea-a5fd-4e03-8157-ae59da7d6651] Running
	I1107 23:55:06.062399 1520543 system_pods.go:89] "storage-provisioner" [1e92762e-f03a-4e20-9228-9a7ee152c9d1] Running
	I1107 23:55:06.062410 1520543 system_pods.go:126] duration metric: took 204.761152ms to wait for k8s-apps to be running ...
	I1107 23:55:06.062420 1520543 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:55:06.062482 1520543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:55:06.076735 1520543 system_svc.go:56] duration metric: took 14.303904ms WaitForService to wait for kubelet.
	I1107 23:55:06.076770 1520543 kubeadm.go:581] duration metric: took 34.284052618s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:55:06.076789 1520543 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:55:06.255275 1520543 request.go:629] Waited for 178.360316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1107 23:55:06.255347 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1107 23:55:06.255361 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:06.255371 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:06.255382 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:06.258483 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:06.258516 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:06.258525 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:06.258532 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:06.258570 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:06.258582 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:06.258596 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:06 GMT
	I1107 23:55:06.258609 1520543 round_trippers.go:580]     Audit-Id: 7b5b32bc-0947-4ed5-8ab7-62fb13b81edb
	I1107 23:55:06.258738 1520543 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1107 23:55:06.259259 1520543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1107 23:55:06.259295 1520543 node_conditions.go:123] node cpu capacity is 2
	I1107 23:55:06.259310 1520543 node_conditions.go:105] duration metric: took 182.487231ms to run NodePressure ...
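	The capacity figures and pressure conditions checked here come straight from the Node status; a quick way to inspect them manually (sketch):

	    kubectl --context multinode-898977 get node multinode-898977 -o jsonpath='{.status.capacity}{"\n"}'
	    kubectl --context multinode-898977 describe node multinode-898977 | grep -A8 Conditions: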
	I1107 23:55:06.259321 1520543 start.go:228] waiting for startup goroutines ...
	I1107 23:55:06.259331 1520543 start.go:233] waiting for cluster config update ...
	I1107 23:55:06.259341 1520543 start.go:242] writing updated cluster config ...
	I1107 23:55:06.262857 1520543 out.go:177] 
	I1107 23:55:06.265584 1520543 config.go:182] Loaded profile config "multinode-898977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:55:06.265694 1520543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/config.json ...
	I1107 23:55:06.268786 1520543 out.go:177] * Starting worker node multinode-898977-m02 in cluster multinode-898977
	I1107 23:55:06.271343 1520543 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:55:06.274159 1520543 out.go:177] * Pulling base image ...
	I1107 23:55:06.277433 1520543 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:55:06.277473 1520543 cache.go:56] Caching tarball of preloaded images
	I1107 23:55:06.277577 1520543 preload.go:174] Found /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1107 23:55:06.277594 1520543 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:55:06.277714 1520543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/config.json ...
	I1107 23:55:06.277941 1520543 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:55:06.295148 1520543 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:55:06.295203 1520543 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 23:55:06.295223 1520543 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:55:06.295270 1520543 start.go:365] acquiring machines lock for multinode-898977-m02: {Name:mk838ca98e9420fc483546f96659acf787d3a47c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:55:06.295444 1520543 start.go:369] acquired machines lock for "multinode-898977-m02" in 150.055µs
	I1107 23:55:06.295483 1520543 start.go:93] Provisioning new machine with config: &{Name:multinode-898977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-898977 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:55:06.295570 1520543 start.go:125] createHost starting for "m02" (driver="docker")
	I1107 23:55:06.300416 1520543 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 23:55:06.300550 1520543 start.go:159] libmachine.API.Create for "multinode-898977" (driver="docker")
	I1107 23:55:06.300577 1520543 client.go:168] LocalClient.Create starting
	I1107 23:55:06.300641 1520543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem
	I1107 23:55:06.300681 1520543 main.go:141] libmachine: Decoding PEM data...
	I1107 23:55:06.300700 1520543 main.go:141] libmachine: Parsing certificate...
	I1107 23:55:06.300773 1520543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem
	I1107 23:55:06.300796 1520543 main.go:141] libmachine: Decoding PEM data...
	I1107 23:55:06.300811 1520543 main.go:141] libmachine: Parsing certificate...
	I1107 23:55:06.301049 1520543 cli_runner.go:164] Run: docker network inspect multinode-898977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:55:06.319673 1520543 network_create.go:77] Found existing network {name:multinode-898977 subnet:0x400345ce40 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1107 23:55:06.319773 1520543 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-898977-m02" container
	I1107 23:55:06.319858 1520543 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:55:06.337238 1520543 cli_runner.go:164] Run: docker volume create multinode-898977-m02 --label name.minikube.sigs.k8s.io=multinode-898977-m02 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:55:06.355806 1520543 oci.go:103] Successfully created a docker volume multinode-898977-m02
	I1107 23:55:06.355892 1520543 cli_runner.go:164] Run: docker run --rm --name multinode-898977-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-898977-m02 --entrypoint /usr/bin/test -v multinode-898977-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:55:06.963238 1520543 oci.go:107] Successfully prepared a docker volume multinode-898977-m02
	I1107 23:55:06.963279 1520543 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:55:06.963302 1520543 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:55:06.963392 1520543 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-898977-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:55:11.458117 1520543 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-898977-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.494668785s)
	I1107 23:55:11.458148 1520543 kic.go:203] duration metric: took 4.494844 seconds to extract preloaded images to volume
	W1107 23:55:11.458294 1520543 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:55:11.458408 1520543 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:55:11.527493 1520543 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-898977-m02 --name multinode-898977-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-898977-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-898977-m02 --network multinode-898977 --ip 192.168.58.3 --volume multinode-898977-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:55:11.908812 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977-m02 --format={{.State.Running}}
	I1107 23:55:11.934573 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977-m02 --format={{.State.Status}}
	I1107 23:55:11.958675 1520543 cli_runner.go:164] Run: docker exec multinode-898977-m02 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:55:12.065249 1520543 oci.go:144] the created container "multinode-898977-m02" has a running status.
	I1107 23:55:12.065291 1520543 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa...
	I1107 23:55:12.731081 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 23:55:12.738225 1520543 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:55:12.787484 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977-m02 --format={{.State.Status}}
	I1107 23:55:12.813627 1520543 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:55:12.813658 1520543 kic_runner.go:114] Args: [docker exec --privileged multinode-898977-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
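	What the kic runner does here is roughly equivalent to the following manual steps (a sketch; the key file name is illustrative, and 34148 is the host port docker published for 22/tcp on this container):

	    ssh-keygen -t rsa -N '' -f ./id_rsa
	    docker cp ./id_rsa.pub multinode-898977-m02:/home/docker/.ssh/authorized_keys
	    docker exec --privileged multinode-898977-m02 chown docker:docker /home/docker/.ssh/authorized_keys
	    ssh -i ./id_rsa -p 34148 docker@127.0.0.1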
	I1107 23:55:12.906599 1520543 cli_runner.go:164] Run: docker container inspect multinode-898977-m02 --format={{.State.Status}}
	I1107 23:55:12.935507 1520543 machine.go:88] provisioning docker machine ...
	I1107 23:55:12.935541 1520543 ubuntu.go:169] provisioning hostname "multinode-898977-m02"
	I1107 23:55:12.935616 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:55:12.982308 1520543 main.go:141] libmachine: Using SSH client type: native
	I1107 23:55:12.982720 1520543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1107 23:55:12.982744 1520543 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-898977-m02 && echo "multinode-898977-m02" | sudo tee /etc/hostname
	I1107 23:55:13.186055 1520543 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-898977-m02
	
	I1107 23:55:13.186141 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:55:13.212346 1520543 main.go:141] libmachine: Using SSH client type: native
	I1107 23:55:13.212778 1520543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1107 23:55:13.212800 1520543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-898977-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-898977-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-898977-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:55:13.355263 1520543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:55:13.355330 1520543 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-1449649/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-1449649/.minikube}
	I1107 23:55:13.355364 1520543 ubuntu.go:177] setting up certificates
	I1107 23:55:13.355385 1520543 provision.go:83] configureAuth start
	I1107 23:55:13.355459 1520543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898977-m02
	I1107 23:55:13.375695 1520543 provision.go:138] copyHostCerts
	I1107 23:55:13.375736 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1107 23:55:13.375769 1520543 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem, removing ...
	I1107 23:55:13.375775 1520543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1107 23:55:13.375852 1520543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem (1082 bytes)
	I1107 23:55:13.375927 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1107 23:55:13.375944 1520543 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem, removing ...
	I1107 23:55:13.375948 1520543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1107 23:55:13.375977 1520543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem (1123 bytes)
	I1107 23:55:13.376016 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1107 23:55:13.376030 1520543 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem, removing ...
	I1107 23:55:13.376034 1520543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1107 23:55:13.376058 1520543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem (1675 bytes)
	I1107 23:55:13.376098 1520543 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem org=jenkins.multinode-898977-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-898977-m02]
	I1107 23:55:13.639599 1520543 provision.go:172] copyRemoteCerts
	I1107 23:55:13.639672 1520543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:55:13.639714 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:55:13.659488 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa Username:docker}
	I1107 23:55:13.753057 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:55:13.753118 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 23:55:13.784044 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:55:13.784108 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1107 23:55:13.815560 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:55:13.815633 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:55:13.846260 1520543 provision.go:86] duration metric: configureAuth took 490.849326ms
	I1107 23:55:13.846324 1520543 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:55:13.846558 1520543 config.go:182] Loaded profile config "multinode-898977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:55:13.846701 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:55:13.865728 1520543 main.go:141] libmachine: Using SSH client type: native
	I1107 23:55:13.866193 1520543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1107 23:55:13.866214 1520543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:55:14.125665 1520543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:55:14.125713 1520543 machine.go:91] provisioned docker machine in 1.19018272s
	I1107 23:55:14.125724 1520543 client.go:171] LocalClient.Create took 7.825139559s
	I1107 23:55:14.125741 1520543 start.go:167] duration metric: libmachine.API.Create for "multinode-898977" took 7.825192351s
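	The "%!s(MISSING)" and "%!p(MISSING)" fragments that appear in a few echoed commands above and below look like a quirk of the log formatter rather than of the commands themselves: the command string contains a literal % and appears to be passed back through a Go format call without arguments, so the verb is reported as missing even though the executed command carried the real value (the CRIO_MINIKUBE_OPTIONS line echoed back in the SSH output confirms the file was written). The intended host-side command was roughly:

	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio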
	I1107 23:55:14.125759 1520543 start.go:300] post-start starting for "multinode-898977-m02" (driver="docker")
	I1107 23:55:14.125773 1520543 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:55:14.125850 1520543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:55:14.125896 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:55:14.149747 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa Username:docker}
	I1107 23:55:14.249391 1520543 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:55:14.253809 1520543 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1107 23:55:14.253830 1520543 command_runner.go:130] > NAME="Ubuntu"
	I1107 23:55:14.253838 1520543 command_runner.go:130] > VERSION_ID="22.04"
	I1107 23:55:14.253845 1520543 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1107 23:55:14.253851 1520543 command_runner.go:130] > VERSION_CODENAME=jammy
	I1107 23:55:14.253885 1520543 command_runner.go:130] > ID=ubuntu
	I1107 23:55:14.253898 1520543 command_runner.go:130] > ID_LIKE=debian
	I1107 23:55:14.253904 1520543 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1107 23:55:14.253910 1520543 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1107 23:55:14.253923 1520543 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1107 23:55:14.253932 1520543 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1107 23:55:14.253957 1520543 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1107 23:55:14.254053 1520543 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:55:14.254103 1520543 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:55:14.254121 1520543 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:55:14.254143 1520543 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:55:14.254170 1520543 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/addons for local assets ...
	I1107 23:55:14.254249 1520543 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/files for local assets ...
	I1107 23:55:14.254342 1520543 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> 14550192.pem in /etc/ssl/certs
	I1107 23:55:14.254358 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> /etc/ssl/certs/14550192.pem
	I1107 23:55:14.254471 1520543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:55:14.265447 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /etc/ssl/certs/14550192.pem (1708 bytes)
	I1107 23:55:14.295501 1520543 start.go:303] post-start completed in 169.722857ms
	I1107 23:55:14.295941 1520543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898977-m02
	I1107 23:55:14.320179 1520543 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/config.json ...
	I1107 23:55:14.320465 1520543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:55:14.320516 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:55:14.340419 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa Username:docker}
	I1107 23:55:14.432061 1520543 command_runner.go:130] > 17%!
	(MISSING)I1107 23:55:14.432134 1520543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:55:14.437489 1520543 command_runner.go:130] > 162G
	I1107 23:55:14.437750 1520543 start.go:128] duration metric: createHost completed in 8.14216774s
	I1107 23:55:14.437766 1520543 start.go:83] releasing machines lock for "multinode-898977-m02", held for 8.142309187s
	I1107 23:55:14.437839 1520543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898977-m02
	I1107 23:55:14.462709 1520543 out.go:177] * Found network options:
	I1107 23:55:14.464591 1520543 out.go:177]   - NO_PROXY=192.168.58.2
	W1107 23:55:14.466213 1520543 proxy.go:119] fail to check proxy env: Error ip not in block
	W1107 23:55:14.466251 1520543 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 23:55:14.466328 1520543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:55:14.466373 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:55:14.466631 1520543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:55:14.466684 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:55:14.486186 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa Username:docker}
	I1107 23:55:14.503531 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa Username:docker}
	I1107 23:55:14.731853 1520543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:55:14.809241 1520543 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:55:14.809277 1520543 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1107 23:55:14.809286 1520543 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1107 23:55:14.809296 1520543 command_runner.go:130] > Device: b3h/179d	Inode: 5189945     Links: 1
	I1107 23:55:14.809303 1520543 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:55:14.809311 1520543 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1107 23:55:14.809317 1520543 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1107 23:55:14.809328 1520543 command_runner.go:130] > Change: 2023-11-07 23:30:04.327579928 +0000
	I1107 23:55:14.809335 1520543 command_runner.go:130] >  Birth: 2023-11-07 23:30:04.327579928 +0000
	I1107 23:55:14.809424 1520543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:55:14.834106 1520543 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:55:14.834246 1520543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:55:14.876915 1520543 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1107 23:55:14.876993 1520543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
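	Renaming competing CNI configs to *.mk_disabled is how minikube sidelines the bridge/podman configs so the CNI it deploys (kindnet in this run) wins; they can be restored by stripping the suffix, for example:

	    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;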
	I1107 23:55:14.877015 1520543 start.go:472] detecting cgroup driver to use...
	I1107 23:55:14.877071 1520543 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:55:14.877144 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:55:14.899305 1520543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:55:14.913336 1520543 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:55:14.913400 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:55:14.930073 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:55:14.946891 1520543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:55:15.063982 1520543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:55:15.179691 1520543 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1107 23:55:15.180170 1520543 docker.go:219] disabling docker service ...
	I1107 23:55:15.180270 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:55:15.206762 1520543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:55:15.222771 1520543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:55:15.333673 1520543 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1107 23:55:15.333833 1520543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:55:15.441482 1520543 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1107 23:55:15.441558 1520543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:55:15.456659 1520543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:55:15.476155 1520543 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
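	With /etc/crictl.yaml pointing at the CRI-O socket, crictl on the node can talk to the runtime directly, for example:

	    sudo crictl info
	    sudo crictl ps -a
	    sudo crictl images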
	I1107 23:55:15.477621 1520543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:55:15.477687 1520543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:55:15.492079 1520543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:55:15.492152 1520543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:55:15.504678 1520543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:55:15.516616 1520543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:55:15.530198 1520543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:55:15.542331 1520543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:55:15.552267 1520543 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1107 23:55:15.553601 1520543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:55:15.564654 1520543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:55:15.682724 1520543 ssh_runner.go:195] Run: sudo systemctl restart crio
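	After the sed edits above, the drop-in at /etc/crio/crio.conf.d/02-crio.conf ends up with roughly the following keys before CRI-O is restarted (section headers shown for orientation; the exact layout of the kicbase drop-in may differ):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"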
	I1107 23:55:15.823847 1520543 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:55:15.823951 1520543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:55:15.828977 1520543 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:55:15.829050 1520543 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:55:15.829072 1520543 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1107 23:55:15.829092 1520543 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:55:15.829127 1520543 command_runner.go:130] > Access: 2023-11-07 23:55:15.806474308 +0000
	I1107 23:55:15.829153 1520543 command_runner.go:130] > Modify: 2023-11-07 23:55:15.806474308 +0000
	I1107 23:55:15.829175 1520543 command_runner.go:130] > Change: 2023-11-07 23:55:15.806474308 +0000
	I1107 23:55:15.829208 1520543 command_runner.go:130] >  Birth: -
	I1107 23:55:15.829247 1520543 start.go:540] Will wait 60s for crictl version
	I1107 23:55:15.829326 1520543 ssh_runner.go:195] Run: which crictl
	I1107 23:55:15.833784 1520543 command_runner.go:130] > /usr/bin/crictl
	I1107 23:55:15.834298 1520543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:55:15.877539 1520543 command_runner.go:130] > Version:  0.1.0
	I1107 23:55:15.877609 1520543 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:55:15.877630 1520543 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1107 23:55:15.877652 1520543 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:55:15.880162 1520543 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1107 23:55:15.880304 1520543 ssh_runner.go:195] Run: crio --version
	I1107 23:55:15.928279 1520543 command_runner.go:130] > crio version 1.24.6
	I1107 23:55:15.928362 1520543 command_runner.go:130] > Version:          1.24.6
	I1107 23:55:15.928391 1520543 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1107 23:55:15.928441 1520543 command_runner.go:130] > GitTreeState:     clean
	I1107 23:55:15.928486 1520543 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1107 23:55:15.928522 1520543 command_runner.go:130] > GoVersion:        go1.18.2
	I1107 23:55:15.928547 1520543 command_runner.go:130] > Compiler:         gc
	I1107 23:55:15.928569 1520543 command_runner.go:130] > Platform:         linux/arm64
	I1107 23:55:15.928603 1520543 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:55:15.928631 1520543 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:55:15.928651 1520543 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:55:15.928689 1520543 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:55:15.928852 1520543 ssh_runner.go:195] Run: crio --version
	I1107 23:55:15.972351 1520543 command_runner.go:130] > crio version 1.24.6
	I1107 23:55:15.972421 1520543 command_runner.go:130] > Version:          1.24.6
	I1107 23:55:15.972443 1520543 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1107 23:55:15.972461 1520543 command_runner.go:130] > GitTreeState:     clean
	I1107 23:55:15.972496 1520543 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1107 23:55:15.972520 1520543 command_runner.go:130] > GoVersion:        go1.18.2
	I1107 23:55:15.972541 1520543 command_runner.go:130] > Compiler:         gc
	I1107 23:55:15.972574 1520543 command_runner.go:130] > Platform:         linux/arm64
	I1107 23:55:15.972602 1520543 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:55:15.972625 1520543 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:55:15.972660 1520543 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:55:15.972685 1520543 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:55:15.976981 1520543 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1107 23:55:15.978568 1520543 out.go:177]   - env NO_PROXY=192.168.58.2
	I1107 23:55:15.980299 1520543 cli_runner.go:164] Run: docker network inspect multinode-898977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:55:16.007816 1520543 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1107 23:55:16.012709 1520543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:55:16.027051 1520543 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977 for IP: 192.168.58.3
	I1107 23:55:16.027088 1520543 certs.go:190] acquiring lock for shared ca certs: {Name:mk4f8465cbc85ba57ebf3be6025d59928913c61b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:55:16.027237 1520543 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key
	I1107 23:55:16.027288 1520543 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key
	I1107 23:55:16.027302 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:55:16.027320 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:55:16.027335 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:55:16.027355 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:55:16.027420 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem (1338 bytes)
	W1107 23:55:16.027456 1520543 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019_empty.pem, impossibly tiny 0 bytes
	I1107 23:55:16.027469 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:55:16.027495 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem (1082 bytes)
	I1107 23:55:16.027523 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:55:16.027552 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem (1675 bytes)
	I1107 23:55:16.027604 1520543 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem (1708 bytes)
	I1107 23:55:16.027637 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem -> /usr/share/ca-certificates/1455019.pem
	I1107 23:55:16.027654 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> /usr/share/ca-certificates/14550192.pem
	I1107 23:55:16.027668 1520543 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:55:16.028016 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:55:16.058668 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:55:16.091150 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:55:16.120033 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:55:16.148886 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/1455019.pem --> /usr/share/ca-certificates/1455019.pem (1338 bytes)
	I1107 23:55:16.179477 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /usr/share/ca-certificates/14550192.pem (1708 bytes)
	I1107 23:55:16.210581 1520543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:55:16.238908 1520543 ssh_runner.go:195] Run: openssl version
	I1107 23:55:16.245836 1520543 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1107 23:55:16.245944 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14550192.pem && ln -fs /usr/share/ca-certificates/14550192.pem /etc/ssl/certs/14550192.pem"
	I1107 23:55:16.257933 1520543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14550192.pem
	I1107 23:55:16.262609 1520543 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:38 /usr/share/ca-certificates/14550192.pem
	I1107 23:55:16.263079 1520543 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:38 /usr/share/ca-certificates/14550192.pem
	I1107 23:55:16.263137 1520543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14550192.pem
	I1107 23:55:16.271421 1520543 command_runner.go:130] > 3ec20f2e
	I1107 23:55:16.271850 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14550192.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:55:16.283650 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:55:16.295608 1520543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:55:16.300353 1520543 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:30 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:55:16.300719 1520543 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:30 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:55:16.300787 1520543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:55:16.309484 1520543 command_runner.go:130] > b5213941
	I1107 23:55:16.309842 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:55:16.321836 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455019.pem && ln -fs /usr/share/ca-certificates/1455019.pem /etc/ssl/certs/1455019.pem"
	I1107 23:55:16.333788 1520543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455019.pem
	I1107 23:55:16.338686 1520543 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:38 /usr/share/ca-certificates/1455019.pem
	I1107 23:55:16.338737 1520543 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:38 /usr/share/ca-certificates/1455019.pem
	I1107 23:55:16.338797 1520543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455019.pem
	I1107 23:55:16.347342 1520543 command_runner.go:130] > 51391683
	I1107 23:55:16.347837 1520543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1455019.pem /etc/ssl/certs/51391683.0"
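	The three openssl/ln pairs above register each CA with the node's OpenSSL trust machinery: compute the subject hash of the PEM, then symlink <hash>.0 under /etc/ssl/certs to it. A minimal shell sketch of the same wiring for one certificate, using the paths from this run (an illustration, not minikube's actual helper code):
	
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 in this run
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"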
	I1107 23:55:16.360328 1520543 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:55:16.364774 1520543 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:55:16.364932 1520543 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
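	That non-zero ls is expected on a fresh node: a missing /var/lib/minikube/certs/etcd directory is taken as "likely first start", after which the etcd certificates get generated. The equivalent check in plain shell (message wording is illustrative, not minikube's):
	
		if ! sudo ls /var/lib/minikube/certs/etcd >/dev/null 2>&1; then
		    echo "etcd certs directory missing; treating this as a first start"
		fi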
	I1107 23:55:16.365059 1520543 ssh_runner.go:195] Run: crio config
	I1107 23:55:16.420768 1520543 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:55:16.420837 1520543 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:55:16.420870 1520543 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:55:16.420890 1520543 command_runner.go:130] > #
	I1107 23:55:16.420915 1520543 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:55:16.420939 1520543 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:55:16.420961 1520543 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:55:16.420983 1520543 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:55:16.421015 1520543 command_runner.go:130] > # reload'.
	I1107 23:55:16.421039 1520543 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:55:16.421062 1520543 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:55:16.421083 1520543 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:55:16.421104 1520543 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:55:16.421124 1520543 command_runner.go:130] > [crio]
	I1107 23:55:16.421145 1520543 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:55:16.421166 1520543 command_runner.go:130] > # containers images, in this directory.
	I1107 23:55:16.421616 1520543 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1107 23:55:16.421658 1520543 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:55:16.421680 1520543 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1107 23:55:16.421702 1520543 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:55:16.421725 1520543 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:55:16.421790 1520543 command_runner.go:130] > # storage_driver = "vfs"
	I1107 23:55:16.421816 1520543 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1107 23:55:16.421837 1520543 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:55:16.421860 1520543 command_runner.go:130] > # storage_option = [
	I1107 23:55:16.421879 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.421901 1520543 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:55:16.421934 1520543 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:55:16.422150 1520543 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:55:16.422189 1520543 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:55:16.422221 1520543 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:55:16.422242 1520543 command_runner.go:130] > # always happen on a node reboot
	I1107 23:55:16.422753 1520543 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:55:16.422809 1520543 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:55:16.422832 1520543 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:55:16.422881 1520543 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:55:16.422910 1520543 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:55:16.422933 1520543 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:55:16.422959 1520543 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:55:16.423003 1520543 command_runner.go:130] > # internal_wipe = true
	I1107 23:55:16.423025 1520543 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:55:16.423047 1520543 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:55:16.423086 1520543 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:55:16.423111 1520543 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:55:16.423133 1520543 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:55:16.423161 1520543 command_runner.go:130] > [crio.api]
	I1107 23:55:16.423185 1520543 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:55:16.423378 1520543 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:55:16.423414 1520543 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:55:16.423466 1520543 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:55:16.423491 1520543 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:55:16.423511 1520543 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:55:16.423538 1520543 command_runner.go:130] > # stream_port = "0"
	I1107 23:55:16.423561 1520543 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:55:16.423581 1520543 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:55:16.423603 1520543 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:55:16.423846 1520543 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:55:16.423900 1520543 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:55:16.423924 1520543 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:55:16.423961 1520543 command_runner.go:130] > # minutes.
	I1107 23:55:16.424011 1520543 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:55:16.424037 1520543 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:55:16.424059 1520543 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:55:16.424080 1520543 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:55:16.424101 1520543 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:55:16.424128 1520543 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:55:16.424155 1520543 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:55:16.424176 1520543 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:55:16.424200 1520543 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:55:16.424427 1520543 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1107 23:55:16.424471 1520543 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:55:16.424495 1520543 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1107 23:55:16.424538 1520543 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:55:16.424566 1520543 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:55:16.424584 1520543 command_runner.go:130] > [crio.runtime]
	I1107 23:55:16.424613 1520543 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:55:16.424636 1520543 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:55:16.424656 1520543 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:55:16.424677 1520543 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:55:16.424697 1520543 command_runner.go:130] > # default_ulimits = [
	I1107 23:55:16.424715 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.424746 1520543 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:55:16.424978 1520543 command_runner.go:130] > # no_pivot = false
	I1107 23:55:16.425076 1520543 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:55:16.425118 1520543 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:55:16.425172 1520543 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:55:16.425205 1520543 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:55:16.425229 1520543 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:55:16.425252 1520543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:55:16.425283 1520543 command_runner.go:130] > # conmon = ""
	I1107 23:55:16.425304 1520543 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:55:16.425335 1520543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:55:16.425358 1520543 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:55:16.425381 1520543 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:55:16.425402 1520543 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:55:16.425427 1520543 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:55:16.425447 1520543 command_runner.go:130] > # conmon_env = [
	I1107 23:55:16.425644 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.425679 1520543 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:55:16.425700 1520543 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:55:16.425736 1520543 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:55:16.426052 1520543 command_runner.go:130] > # default_env = [
	I1107 23:55:16.426217 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.426294 1520543 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:55:16.426377 1520543 command_runner.go:130] > # selinux = false
	I1107 23:55:16.426418 1520543 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:55:16.426440 1520543 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:55:16.426462 1520543 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:55:16.426667 1520543 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:55:16.426728 1520543 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:55:16.426785 1520543 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:55:16.426818 1520543 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:55:16.426845 1520543 command_runner.go:130] > # which might increase security.
	I1107 23:55:16.426866 1520543 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1107 23:55:16.426890 1520543 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:55:16.426914 1520543 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:55:16.426937 1520543 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:55:16.426959 1520543 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:55:16.426979 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:55:16.427041 1520543 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:55:16.427078 1520543 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:55:16.427102 1520543 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:55:16.427122 1520543 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:55:16.427169 1520543 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:55:16.427194 1520543 command_runner.go:130] > # irqbalance daemon.
	I1107 23:55:16.427215 1520543 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:55:16.427239 1520543 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:55:16.427260 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:55:16.427286 1520543 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:55:16.427309 1520543 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:55:16.427534 1520543 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:55:16.427572 1520543 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:55:16.427651 1520543 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:55:16.427687 1520543 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:55:16.427721 1520543 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:55:16.427743 1520543 command_runner.go:130] > # will be added.
	I1107 23:55:16.427774 1520543 command_runner.go:130] > # default_capabilities = [
	I1107 23:55:16.427979 1520543 command_runner.go:130] > # 	"CHOWN",
	I1107 23:55:16.428040 1520543 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:55:16.428105 1520543 command_runner.go:130] > # 	"FSETID",
	I1107 23:55:16.428126 1520543 command_runner.go:130] > # 	"FOWNER",
	I1107 23:55:16.428163 1520543 command_runner.go:130] > # 	"SETGID",
	I1107 23:55:16.428207 1520543 command_runner.go:130] > # 	"SETUID",
	I1107 23:55:16.428240 1520543 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:55:16.428261 1520543 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:55:16.428287 1520543 command_runner.go:130] > # 	"KILL",
	I1107 23:55:16.428320 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.428346 1520543 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1107 23:55:16.428377 1520543 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1107 23:55:16.428622 1520543 command_runner.go:130] > # add_inheritable_capabilities = true
	I1107 23:55:16.428657 1520543 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:55:16.428666 1520543 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:55:16.428828 1520543 command_runner.go:130] > # default_sysctls = [
	I1107 23:55:16.428838 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.428860 1520543 command_runner.go:130] > # List of devices on the host that a
	I1107 23:55:16.428869 1520543 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:55:16.429108 1520543 command_runner.go:130] > # allowed_devices = [
	I1107 23:55:16.429118 1520543 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:55:16.429122 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.429129 1520543 command_runner.go:130] > # List of additional devices. specified as
	I1107 23:55:16.429170 1520543 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:55:16.429178 1520543 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:55:16.429186 1520543 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:55:16.429191 1520543 command_runner.go:130] > # additional_devices = [
	I1107 23:55:16.429195 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.429201 1520543 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:55:16.429206 1520543 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:55:16.429211 1520543 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:55:16.429224 1520543 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:55:16.429229 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.429237 1520543 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:55:16.429245 1520543 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:55:16.429250 1520543 command_runner.go:130] > # Defaults to false.
	I1107 23:55:16.429256 1520543 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:55:16.429264 1520543 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:55:16.429272 1520543 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:55:16.429276 1520543 command_runner.go:130] > # hooks_dir = [
	I1107 23:55:16.429282 1520543 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:55:16.429294 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.429302 1520543 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:55:16.429310 1520543 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:55:16.429317 1520543 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:55:16.429321 1520543 command_runner.go:130] > #
	I1107 23:55:16.429329 1520543 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:55:16.429337 1520543 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:55:16.429344 1520543 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:55:16.429348 1520543 command_runner.go:130] > #
	I1107 23:55:16.429356 1520543 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:55:16.429371 1520543 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:55:16.429380 1520543 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:55:16.429386 1520543 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:55:16.429390 1520543 command_runner.go:130] > #
	I1107 23:55:16.429395 1520543 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:55:16.429403 1520543 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:55:16.429411 1520543 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:55:16.429416 1520543 command_runner.go:130] > # pids_limit = 0
	I1107 23:55:16.429424 1520543 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1107 23:55:16.429437 1520543 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:55:16.429456 1520543 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:55:16.429466 1520543 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:55:16.429471 1520543 command_runner.go:130] > # log_size_max = -1
	I1107 23:55:16.429479 1520543 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1107 23:55:16.429484 1520543 command_runner.go:130] > # log_to_journald = false
	I1107 23:55:16.429503 1520543 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:55:16.429720 1520543 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:55:16.429737 1520543 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:55:16.429744 1520543 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:55:16.429779 1520543 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:55:16.429785 1520543 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:55:16.429797 1520543 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:55:16.429803 1520543 command_runner.go:130] > # read_only = false
	I1107 23:55:16.429814 1520543 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:55:16.429823 1520543 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:55:16.429845 1520543 command_runner.go:130] > # live configuration reload.
	I1107 23:55:16.430155 1520543 command_runner.go:130] > # log_level = "info"
	I1107 23:55:16.430185 1520543 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:55:16.430192 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:55:16.430208 1520543 command_runner.go:130] > # log_filter = ""
	I1107 23:55:16.430225 1520543 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:55:16.430236 1520543 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:55:16.430260 1520543 command_runner.go:130] > # separated by comma.
	I1107 23:55:16.430275 1520543 command_runner.go:130] > # uid_mappings = ""
	I1107 23:55:16.430292 1520543 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:55:16.430305 1520543 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:55:16.430311 1520543 command_runner.go:130] > # separated by comma.
	I1107 23:55:16.430319 1520543 command_runner.go:130] > # gid_mappings = ""
	I1107 23:55:16.430339 1520543 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:55:16.430352 1520543 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:55:16.430370 1520543 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:55:16.430381 1520543 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:55:16.430389 1520543 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:55:16.430412 1520543 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:55:16.430425 1520543 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:55:16.430446 1520543 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:55:16.430462 1520543 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:55:16.430486 1520543 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:55:16.430502 1520543 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:55:16.430518 1520543 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:55:16.430534 1520543 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:55:16.430543 1520543 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:55:16.430564 1520543 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1107 23:55:16.430578 1520543 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:55:16.430592 1520543 command_runner.go:130] > # drop_infra_ctr = true
	I1107 23:55:16.430606 1520543 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:55:16.430615 1520543 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:55:16.430641 1520543 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:55:16.430652 1520543 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:55:16.430660 1520543 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:55:16.430675 1520543 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:55:16.430686 1520543 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:55:16.430695 1520543 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:55:16.430715 1520543 command_runner.go:130] > # pinns_path = ""
	I1107 23:55:16.430731 1520543 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:55:16.430750 1520543 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:55:16.430766 1520543 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:55:16.430771 1520543 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:55:16.430794 1520543 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:55:16.430812 1520543 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1107 23:55:16.430834 1520543 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1107 23:55:16.430848 1520543 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:55:16.430871 1520543 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:55:16.430883 1520543 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:55:16.430898 1520543 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:55:16.430912 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.430921 1520543 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:55:16.430945 1520543 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:55:16.430959 1520543 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:55:16.430978 1520543 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:55:16.430989 1520543 command_runner.go:130] > #
	I1107 23:55:16.430995 1520543 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:55:16.431005 1520543 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:55:16.431022 1520543 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:55:16.431034 1520543 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:55:16.431040 1520543 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:55:16.431061 1520543 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:55:16.431067 1520543 command_runner.go:130] > # Where:
	I1107 23:55:16.431077 1520543 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:55:16.431100 1520543 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:55:16.431157 1520543 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:55:16.431183 1520543 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:55:16.431195 1520543 command_runner.go:130] > #   in $PATH.
	I1107 23:55:16.431203 1520543 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:55:16.431213 1520543 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:55:16.431221 1520543 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:55:16.431229 1520543 command_runner.go:130] > #   state.
	I1107 23:55:16.431237 1520543 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:55:16.431259 1520543 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1107 23:55:16.431273 1520543 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:55:16.431280 1520543 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:55:16.431293 1520543 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:55:16.431301 1520543 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:55:16.431310 1520543 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:55:16.431333 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:55:16.431350 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:55:16.431368 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:55:16.431383 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:55:16.431393 1520543 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:55:16.431420 1520543 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:55:16.431442 1520543 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:55:16.431452 1520543 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:55:16.431461 1520543 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:55:16.431467 1520543 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:55:16.431496 1520543 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1107 23:55:16.431508 1520543 command_runner.go:130] > runtime_type = "oci"
	I1107 23:55:16.431513 1520543 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:55:16.431522 1520543 command_runner.go:130] > runtime_config_path = ""
	I1107 23:55:16.431527 1520543 command_runner.go:130] > monitor_path = ""
	I1107 23:55:16.431532 1520543 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:55:16.431540 1520543 command_runner.go:130] > monitor_exec_cgroup = ""
	I1107 23:55:16.431571 1520543 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:55:16.431583 1520543 command_runner.go:130] > # running containers
	I1107 23:55:16.431600 1520543 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:55:16.431615 1520543 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:55:16.431636 1520543 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:55:16.431652 1520543 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1107 23:55:16.431671 1520543 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:55:16.431685 1520543 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:55:16.431692 1520543 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:55:16.431697 1520543 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:55:16.431719 1520543 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:55:16.431738 1520543 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:55:16.431747 1520543 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:55:16.431757 1520543 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:55:16.431765 1520543 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:55:16.431777 1520543 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1107 23:55:16.431804 1520543 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:55:16.431823 1520543 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:55:16.431840 1520543 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:55:16.431942 1520543 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:55:16.431961 1520543 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:55:16.431970 1520543 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:55:16.431984 1520543 command_runner.go:130] > # Example:
	I1107 23:55:16.432000 1520543 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:55:16.432021 1520543 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:55:16.432034 1520543 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:55:16.432049 1520543 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:55:16.432061 1520543 command_runner.go:130] > # cpuset = 0
	I1107 23:55:16.432066 1520543 command_runner.go:130] > # cpushares = "0-1"
	I1107 23:55:16.432073 1520543 command_runner.go:130] > # Where:
	I1107 23:55:16.432099 1520543 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:55:16.432116 1520543 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:55:16.432127 1520543 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:55:16.432136 1520543 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:55:16.432149 1520543 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:55:16.432171 1520543 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1107 23:55:16.432184 1520543 command_runner.go:130] > # 
	I1107 23:55:16.432282 1520543 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:55:16.432295 1520543 command_runner.go:130] > #
	I1107 23:55:16.432303 1520543 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:55:16.432321 1520543 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:55:16.432334 1520543 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:55:16.432355 1520543 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:55:16.432374 1520543 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:55:16.432387 1520543 command_runner.go:130] > [crio.image]
	I1107 23:55:16.432403 1520543 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:55:16.432410 1520543 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:55:16.432434 1520543 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:55:16.432450 1520543 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:55:16.432467 1520543 command_runner.go:130] > # global_auth_file = ""
	I1107 23:55:16.432479 1520543 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:55:16.432486 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:55:16.432507 1520543 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:55:16.432522 1520543 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:55:16.432540 1520543 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:55:16.432556 1520543 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:55:16.432565 1520543 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:55:16.432584 1520543 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:55:16.432598 1520543 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1107 23:55:16.432619 1520543 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1107 23:55:16.432635 1520543 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:55:16.432641 1520543 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:55:16.432666 1520543 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:55:16.432682 1520543 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:55:16.432701 1520543 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:55:16.432716 1520543 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:55:16.432723 1520543 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:55:16.432759 1520543 command_runner.go:130] > # signature_policy = ""
	I1107 23:55:16.432772 1520543 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:55:16.432780 1520543 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:55:16.432788 1520543 command_runner.go:130] > # changing them here.
	I1107 23:55:16.432794 1520543 command_runner.go:130] > # insecure_registries = [
	I1107 23:55:16.432800 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.432809 1520543 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:55:16.432839 1520543 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1107 23:55:16.432851 1520543 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:55:16.432858 1520543 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:55:16.432866 1520543 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:55:16.432874 1520543 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1107 23:55:16.432882 1520543 command_runner.go:130] > # CNI plugins.
	I1107 23:55:16.432887 1520543 command_runner.go:130] > [crio.network]
	I1107 23:55:16.432907 1520543 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:55:16.432921 1520543 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1107 23:55:16.432936 1520543 command_runner.go:130] > # cni_default_network = ""
	I1107 23:55:16.432950 1520543 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:55:16.432956 1520543 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:55:16.432979 1520543 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:55:16.432991 1520543 command_runner.go:130] > # plugin_dirs = [
	I1107 23:55:16.432996 1520543 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:55:16.433015 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.433024 1520543 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1107 23:55:16.433036 1520543 command_runner.go:130] > [crio.metrics]
	I1107 23:55:16.433055 1520543 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:55:16.433068 1520543 command_runner.go:130] > # enable_metrics = false
	I1107 23:55:16.433083 1520543 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:55:16.433096 1520543 command_runner.go:130] > # Per default all metrics are enabled.
	I1107 23:55:16.433104 1520543 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:55:16.433116 1520543 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:55:16.433135 1520543 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:55:16.433148 1520543 command_runner.go:130] > # metrics_collectors = [
	I1107 23:55:16.433161 1520543 command_runner.go:130] > # 	"operations",
	I1107 23:55:16.433174 1520543 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:55:16.433180 1520543 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:55:16.433189 1520543 command_runner.go:130] > # 	"operations_errors",
	I1107 23:55:16.433194 1520543 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:55:16.433217 1520543 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:55:16.433231 1520543 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:55:16.433243 1520543 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:55:16.433249 1520543 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:55:16.433257 1520543 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:55:16.433263 1520543 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:55:16.433283 1520543 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:55:16.433295 1520543 command_runner.go:130] > # 	"containers_oom",
	I1107 23:55:16.433308 1520543 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:55:16.433320 1520543 command_runner.go:130] > # 	"operations_total",
	I1107 23:55:16.433326 1520543 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:55:16.433336 1520543 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:55:16.433341 1520543 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:55:16.433653 1520543 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:55:16.433672 1520543 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:55:16.433678 1520543 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:55:16.433684 1520543 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:55:16.433701 1520543 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:55:16.433721 1520543 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:55:16.433726 1520543 command_runner.go:130] > # ]
	I1107 23:55:16.433736 1520543 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:55:16.433741 1520543 command_runner.go:130] > # metrics_port = 9090
	I1107 23:55:16.433758 1520543 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:55:16.433766 1520543 command_runner.go:130] > # metrics_socket = ""
	I1107 23:55:16.433794 1520543 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:55:16.433808 1520543 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:55:16.433817 1520543 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:55:16.433826 1520543 command_runner.go:130] > # certificate on any modification event.
	I1107 23:55:16.433832 1520543 command_runner.go:130] > # metrics_cert = ""
	I1107 23:55:16.433841 1520543 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:55:16.433848 1520543 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:55:16.433878 1520543 command_runner.go:130] > # metrics_key = ""
	I1107 23:55:16.433892 1520543 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:55:16.433898 1520543 command_runner.go:130] > [crio.tracing]
	I1107 23:55:16.433907 1520543 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:55:16.433913 1520543 command_runner.go:130] > # enable_tracing = false
	I1107 23:55:16.433922 1520543 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1107 23:55:16.433941 1520543 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:55:16.433956 1520543 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:55:16.433971 1520543 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:55:16.434005 1520543 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:55:16.434017 1520543 command_runner.go:130] > [crio.stats]
	I1107 23:55:16.434029 1520543 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:55:16.434036 1520543 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:55:16.434056 1520543 command_runner.go:130] > # stats_collection_period = 0
	I1107 23:55:16.436027 1520543 command_runner.go:130] ! time="2023-11-07 23:55:16.418141353Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1107 23:55:16.436055 1520543 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
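	Nearly every line in the `crio config` dump above is a commented-out default; the values this cluster actually overrides are cgroup_manager, conmon_cgroup, the [crio.runtime.runtimes.runc] handler and pause_image. A hedged sketch of the same overrides expressed as a drop-in (the 02-minikube.conf filename is hypothetical; CRI-O merges any *.conf found under /etc/crio/crio.conf.d):
	
		sudo tee /etc/crio/crio.conf.d/02-minikube.conf >/dev/null <<'EOF'
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		
		[crio.runtime.runtimes.runc]
		runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
		runtime_type = "oci"
		runtime_root = "/run/runc"
		
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"
		EOF
		sudo systemctl reload crio   # reload works if the crio unit wires ExecReload to SIGHUP (the upstream unit does)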
	I1107 23:55:16.436114 1520543 cni.go:84] Creating CNI manager for ""
	I1107 23:55:16.436124 1520543 cni.go:136] 2 nodes found, recommending kindnet
	I1107 23:55:16.436134 1520543 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:55:16.436156 1520543 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-898977 NodeName:multinode-898977-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:55:16.436283 1520543 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-898977-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
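	One low-risk way to sanity-check a generated manifest like the one above is to round-trip it through kubeadm itself; if every document parses, the command re-emits the config at the current API version. The /var/tmp/minikube/kubeadm.yaml path is an assumption about where minikube writes the file on the node:
	
		sudo kubeadm config migrate \
		    --old-config /var/tmp/minikube/kubeadm.yaml \
		    --new-config /dev/stdout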
	
	I1107 23:55:16.436337 1520543 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-898977-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-898977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:55:16.436403 1520543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:55:16.446284 1520543 command_runner.go:130] > kubeadm
	I1107 23:55:16.446305 1520543 command_runner.go:130] > kubectl
	I1107 23:55:16.446311 1520543 command_runner.go:130] > kubelet
	I1107 23:55:16.447545 1520543 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:55:16.447619 1520543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1107 23:55:16.458301 1520543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1107 23:55:16.480374 1520543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:55:16.508197 1520543 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:55:16.512686 1520543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
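	The one-liner above pins control-plane.minikube.internal to the control-plane IP in the node's /etc/hosts. A rough Go equivalent, for illustration only: it edits the file in place rather than via the temp file plus sudo cp used in the log, and it needs write access to /etc/hosts.

package main

// Drop any existing "control-plane.minikube.internal" line from /etc/hosts
// and append the control-plane IP, mirroring the bash one-liner in the log.

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.58.2\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// same filter as grep -v $'\tcontrol-plane.minikube.internal$'
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}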
	I1107 23:55:16.526266 1520543 host.go:66] Checking if "multinode-898977" exists ...
	I1107 23:55:16.526551 1520543 start.go:304] JoinCluster: &{Name:multinode-898977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-898977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:55:16.526646 1520543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1107 23:55:16.526700 1520543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:55:16.526615 1520543 config.go:182] Loaded profile config "multinode-898977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:55:16.545646 1520543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:55:16.721508 1520543 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token kxwlp8.bu5w6wcqlmpmvinx --discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 
	I1107 23:55:16.721571 1520543 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:55:16.721603 1520543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kxwlp8.bu5w6wcqlmpmvinx --discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-898977-m02"
	I1107 23:55:16.769726 1520543 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 23:55:16.814883 1520543 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:55:16.814911 1520543 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1049-aws
	I1107 23:55:16.814918 1520543 command_runner.go:130] > OS: Linux
	I1107 23:55:16.814924 1520543 command_runner.go:130] > CGROUPS_CPU: enabled
	I1107 23:55:16.814932 1520543 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1107 23:55:16.814938 1520543 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1107 23:55:16.814945 1520543 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1107 23:55:16.814952 1520543 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1107 23:55:16.814958 1520543 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1107 23:55:16.814965 1520543 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1107 23:55:16.814971 1520543 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1107 23:55:16.814990 1520543 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1107 23:55:16.923689 1520543 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 23:55:16.923713 1520543 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 23:55:16.954756 1520543 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:55:16.954780 1520543 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:55:16.954787 1520543 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:55:17.051568 1520543 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1107 23:55:20.068755 1520543 command_runner.go:130] > This node has joined the cluster:
	I1107 23:55:20.068780 1520543 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1107 23:55:20.068789 1520543 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1107 23:55:20.068797 1520543 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1107 23:55:20.072202 1520543 command_runner.go:130] ! W1107 23:55:16.769199    1021 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1107 23:55:20.072237 1520543 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1107 23:55:20.072253 1520543 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:55:20.072267 1520543 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kxwlp8.bu5w6wcqlmpmvinx --discovery-token-ca-cert-hash sha256:c3941fef5698dd05ce3b8b0cf7c0007a859239b532241e9609b707f9560b2fa6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-898977-m02": (3.350649998s)
	I1107 23:55:20.072286 1520543 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1107 23:55:20.315567 1520543 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1107 23:55:20.315594 1520543 start.go:306] JoinCluster complete in 3.789044791s
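	The JoinCluster step just completed is two shell invocations driven over SSH: print a join command on the control plane, then run it on the worker with the extra flags seen above. A hedged local sketch of the same flow using os/exec; that kubeadm is on PATH and that sudo runs non-interactively are assumptions:

package main

// Two-step worker join: ask the control plane for a join command with a
// non-expiring token, then execute it with the flags minikube appends.

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: on the control plane, print a join command.
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: on the worker, run the printed command plus the extra flags from the log.
	args := strings.Fields(strings.TrimSpace(string(out)))
	args = append(args,
		"--ignore-preflight-errors=all",
		"--cri-socket", "/var/run/crio/crio.sock",
		"--node-name=multinode-898977-m02")

	joinOut, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm join failed: %v\n%s", err, joinOut)
	}
	log.Printf("%s", joinOut)
}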
	I1107 23:55:20.315605 1520543 cni.go:84] Creating CNI manager for ""
	I1107 23:55:20.315612 1520543 cni.go:136] 2 nodes found, recommending kindnet
	I1107 23:55:20.315663 1520543 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:55:20.320671 1520543 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:55:20.320696 1520543 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1107 23:55:20.320711 1520543 command_runner.go:130] > Device: 3ah/58d	Inode: 5193642     Links: 1
	I1107 23:55:20.320720 1520543 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:55:20.320734 1520543 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1107 23:55:20.320741 1520543 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1107 23:55:20.320747 1520543 command_runner.go:130] > Change: 2023-11-07 23:30:05.023574228 +0000
	I1107 23:55:20.320754 1520543 command_runner.go:130] >  Birth: 2023-11-07 23:30:04.971574654 +0000
	I1107 23:55:20.320791 1520543 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:55:20.320798 1520543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:55:20.342485 1520543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:55:20.643892 1520543 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:55:20.651047 1520543 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:55:20.658970 1520543 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1107 23:55:20.678353 1520543 command_runner.go:130] > daemonset.apps/kindnet configured
	I1107 23:55:20.684387 1520543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:55:20.684679 1520543 kapi.go:59] client config for multinode-898977: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.key", CAFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:55:20.685037 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:55:20.685054 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:20.685064 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:20.685071 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:20.687661 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:20.687685 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:20.687693 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:20.687700 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:20.687706 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:20.687712 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:20.687719 1520543 round_trippers.go:580]     Content-Length: 291
	I1107 23:55:20.687726 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:20 GMT
	I1107 23:55:20.687732 1520543 round_trippers.go:580]     Audit-Id: bd84b83f-0f07-4c44-b813-44a6707d3890
	I1107 23:55:20.687761 1520543 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"be9e3fe2-6b1e-44da-90ed-3147e5fd8faf","resourceVersion":"416","creationTimestamp":"2023-11-07T23:54:18Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1107 23:55:20.687854 1520543 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-898977" context rescaled to 1 replicas
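	The GET of the coredns scale subresource above, followed by the "rescaled to 1 replicas" message, amounts to reading and then writing the Deployment's scale. A minimal client-go sketch of that rescale; the kubeconfig path is an illustrative assumption (the test builds its client from the profile's certificates instead):

package main

// Rescale the kube-system/coredns Deployment to 1 replica via the
// scale subresource, mirroring the raw REST calls in the log.

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.TODO()
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
}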
	I1107 23:55:20.687884 1520543 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:55:20.691773 1520543 out.go:177] * Verifying Kubernetes components...
	I1107 23:55:20.693702 1520543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:55:20.732177 1520543 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:55:20.732504 1520543 kapi.go:59] client config for multinode-898977: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/multinode-898977/client.key", CAFile:"/home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:55:20.732815 1520543 node_ready.go:35] waiting up to 6m0s for node "multinode-898977-m02" to be "Ready" ...
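	The repeated node GETs that follow are the readiness poll behind this wait. A minimal client-go sketch of the same check, assuming an illustrative kubeconfig path, the 6-minute deadline stated above, and a 500ms poll interval (the test's own backoff may differ):

package main

// Poll the worker node until its Ready condition turns True or the deadline passes.

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-898977-m02", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				log.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}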
	I1107 23:55:20.732907 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:20.732939 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:20.732966 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:20.732989 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:20.735670 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:20.735689 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:20.735697 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:20.735703 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:20 GMT
	I1107 23:55:20.735710 1520543 round_trippers.go:580]     Audit-Id: 7216b3c7-2fe4-4fe0-8bea-10b8fa445c88
	I1107 23:55:20.735716 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:20.735722 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:20.735728 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:20.736224 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"452","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1107 23:55:20.736624 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:20.736634 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:20.736642 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:20.736649 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:20.738968 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:20.738984 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:20.738992 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:20 GMT
	I1107 23:55:20.738998 1520543 round_trippers.go:580]     Audit-Id: 80876ce7-dd4e-412b-baf6-60bbdb0f098c
	I1107 23:55:20.739004 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:20.739015 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:20.739021 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:20.739027 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:20.739654 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"452","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1107 23:55:21.240650 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:21.240673 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:21.240683 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:21.240690 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:21.243152 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:21.243174 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:21.243182 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:21.243190 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:21 GMT
	I1107 23:55:21.243196 1520543 round_trippers.go:580]     Audit-Id: ecef9821-3462-4b51-a0da-25867a059893
	I1107 23:55:21.243202 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:21.243208 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:21.243215 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:21.243454 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"452","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1107 23:55:21.740493 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:21.740516 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:21.740527 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:21.740535 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:21.743000 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:21.743022 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:21.743031 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:21.743040 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:21.743046 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:21.743052 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:21 GMT
	I1107 23:55:21.743059 1520543 round_trippers.go:580]     Audit-Id: f5663215-e995-424f-9d75-84c08ddda76c
	I1107 23:55:21.743066 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:21.743403 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:22.240419 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:22.240443 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:22.240453 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:22.240460 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:22.242899 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:22.242924 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:22.242933 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:22.242940 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:22.242947 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:22.242953 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:22.242960 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:22 GMT
	I1107 23:55:22.242970 1520543 round_trippers.go:580]     Audit-Id: 2bfc21cc-b944-4462-8206-66b7125922c9
	I1107 23:55:22.243161 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:22.740226 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:22.740265 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:22.740275 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:22.740282 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:22.743108 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:22.743141 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:22.743157 1520543 round_trippers.go:580]     Audit-Id: 4bfeed3e-35ed-4381-b36a-b41c63e03dee
	I1107 23:55:22.743164 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:22.743170 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:22.743177 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:22.743183 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:22.743191 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:22 GMT
	I1107 23:55:22.743301 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:22.743679 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:23.240849 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:23.240874 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:23.240884 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:23.240892 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:23.243607 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:23.243635 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:23.243644 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:23.243650 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:23.243656 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:23.243663 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:23 GMT
	I1107 23:55:23.243669 1520543 round_trippers.go:580]     Audit-Id: d8bd78e5-714b-4db2-934d-0f428c408de7
	I1107 23:55:23.243675 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:23.243797 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:23.740945 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:23.740967 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:23.740977 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:23.740985 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:23.743502 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:23.743524 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:23.743533 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:23.743539 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:23.743545 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:23.743552 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:23 GMT
	I1107 23:55:23.743558 1520543 round_trippers.go:580]     Audit-Id: e041ae8f-4e10-44fe-851c-9134d211f9a6
	I1107 23:55:23.743565 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:23.743753 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:24.240832 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:24.240878 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:24.240889 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:24.240895 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:24.243428 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:24.243451 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:24.243459 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:24.243466 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:24.243473 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:24 GMT
	I1107 23:55:24.243479 1520543 round_trippers.go:580]     Audit-Id: 36cefc4b-7afb-41a0-ae13-bb5663a74c8b
	I1107 23:55:24.243485 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:24.243496 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:24.243861 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:24.740930 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:24.740952 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:24.740961 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:24.740968 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:24.743499 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:24.743537 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:24.743546 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:24 GMT
	I1107 23:55:24.743552 1520543 round_trippers.go:580]     Audit-Id: f5e85aca-31f8-4150-a2f7-db9d04420e7d
	I1107 23:55:24.743559 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:24.743565 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:24.743577 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:24.743583 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:24.743670 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:24.744027 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:25.240814 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:25.240839 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:25.240851 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:25.240858 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:25.243313 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:25.243339 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:25.243348 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:25.243355 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:25.243361 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:25 GMT
	I1107 23:55:25.243368 1520543 round_trippers.go:580]     Audit-Id: 5b7dec5a-b412-4d89-b01d-02c2d4369756
	I1107 23:55:25.243374 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:25.243380 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:25.243486 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:25.740429 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:25.740455 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:25.740466 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:25.740474 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:25.743153 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:25.743182 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:25.743191 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:25.743198 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:25.743205 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:25 GMT
	I1107 23:55:25.743211 1520543 round_trippers.go:580]     Audit-Id: 193bba43-5661-42ba-8ebf-c2be859e0a2e
	I1107 23:55:25.743217 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:25.743224 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:25.743437 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:26.241173 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:26.241202 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:26.241212 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:26.241219 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:26.243892 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:26.243916 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:26.243927 1520543 round_trippers.go:580]     Audit-Id: 9ebe3cb3-f640-46e9-abac-a39e47c459f8
	I1107 23:55:26.243933 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:26.243940 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:26.243946 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:26.243952 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:26.243959 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:26 GMT
	I1107 23:55:26.244080 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:26.740266 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:26.740292 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:26.740306 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:26.740314 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:26.742967 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:26.742992 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:26.743001 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:26.743008 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:26 GMT
	I1107 23:55:26.743015 1520543 round_trippers.go:580]     Audit-Id: 1b42fb35-e6e2-4491-a508-3d84014647c2
	I1107 23:55:26.743021 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:26.743027 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:26.743034 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:26.743134 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:27.241070 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:27.241096 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:27.241106 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:27.241112 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:27.248506 1520543 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1107 23:55:27.248527 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:27.248536 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:27.248543 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:27 GMT
	I1107 23:55:27.248550 1520543 round_trippers.go:580]     Audit-Id: d7fb5072-6263-4f04-a991-c98b735679e7
	I1107 23:55:27.248556 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:27.248562 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:27.248569 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:27.248677 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:27.249040 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:27.740227 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:27.740254 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:27.740265 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:27.740276 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:27.746749 1520543 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 23:55:27.746772 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:27.746781 1520543 round_trippers.go:580]     Audit-Id: 3da0cbf7-1940-4438-a2f9-80cf7e9f9e88
	I1107 23:55:27.746787 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:27.746797 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:27.746803 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:27.746810 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:27.746816 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:27 GMT
	I1107 23:55:27.747264 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:28.240543 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:28.240572 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:28.240583 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:28.240590 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:28.243445 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:28.243471 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:28.243479 1520543 round_trippers.go:580]     Audit-Id: 00f5e4be-2b25-4863-949d-1509ee7e9f4d
	I1107 23:55:28.243486 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:28.243492 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:28.243516 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:28.243527 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:28.243534 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:28 GMT
	I1107 23:55:28.243758 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:28.740199 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:28.740227 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:28.740237 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:28.740245 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:28.742790 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:28.742820 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:28.742829 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:28.742835 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:28.742842 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:28.742849 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:28.742859 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:28 GMT
	I1107 23:55:28.742876 1520543 round_trippers.go:580]     Audit-Id: f4147b7e-e7a1-4229-9ff7-0e68f0bc0277
	I1107 23:55:28.743192 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:29.240889 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:29.240917 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:29.240927 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:29.240937 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:29.243768 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:29.243795 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:29.243804 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:29.243810 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:29.243817 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:29.243823 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:29 GMT
	I1107 23:55:29.243829 1520543 round_trippers.go:580]     Audit-Id: 044a8fa2-2d47-4af8-acda-23ec281d0943
	I1107 23:55:29.243836 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:29.244107 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:29.740859 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:29.740887 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:29.740897 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:29.740908 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:29.743481 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:29.743507 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:29.743517 1520543 round_trippers.go:580]     Audit-Id: 253bcf94-88cc-405d-826a-4c9b626143cc
	I1107 23:55:29.743523 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:29.743529 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:29.743536 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:29.743542 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:29.743552 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:29 GMT
	I1107 23:55:29.743776 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"464","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1107 23:55:29.744144 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:30.240846 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:30.240871 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:30.240882 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:30.240889 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:30.243656 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:30.243678 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:30.243686 1520543 round_trippers.go:580]     Audit-Id: 1e46c495-0d82-4801-979b-f948a79f472c
	I1107 23:55:30.243693 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:30.243700 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:30.243706 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:30.243713 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:30.243720 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:30 GMT
	I1107 23:55:30.243850 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:30.741001 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:30.741025 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:30.741034 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:30.741041 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:30.743569 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:30.743596 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:30.743605 1520543 round_trippers.go:580]     Audit-Id: 50c9bff9-0189-49aa-b549-e14c4809df01
	I1107 23:55:30.743612 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:30.743618 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:30.743625 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:30.743631 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:30.743638 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:30 GMT
	I1107 23:55:30.743864 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:31.240933 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:31.240958 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:31.240968 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:31.240975 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:31.243428 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:31.243456 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:31.243464 1520543 round_trippers.go:580]     Audit-Id: 3d2a5fa8-4ba9-47f1-94ea-5ac4aedbc8bc
	I1107 23:55:31.243471 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:31.243477 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:31.243483 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:31.243491 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:31.243498 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:31 GMT
	I1107 23:55:31.243632 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:31.741096 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:31.741123 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:31.741135 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:31.741142 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:31.743767 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:31.743792 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:31.743801 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:31.743808 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:31.743814 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:31.743821 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:31.743827 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:31 GMT
	I1107 23:55:31.743838 1520543 round_trippers.go:580]     Audit-Id: ba1a1eba-1085-4830-8146-eb8cb770b18c
	I1107 23:55:31.743939 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:31.744315 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:32.241039 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:32.241062 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:32.241072 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:32.241079 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:32.243536 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:32.243556 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:32.243565 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:32.243571 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:32.243578 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:32.243585 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:32 GMT
	I1107 23:55:32.243591 1520543 round_trippers.go:580]     Audit-Id: 7e81586a-fffe-44af-ac37-bdca743bc10b
	I1107 23:55:32.243597 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:32.243789 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:32.740348 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:32.740372 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:32.740382 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:32.740389 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:32.742801 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:32.742822 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:32.742830 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:32.742836 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:32.742843 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:32 GMT
	I1107 23:55:32.742849 1520543 round_trippers.go:580]     Audit-Id: 2dd0fdca-b227-4a0f-a5e4-9d8142d674b6
	I1107 23:55:32.742855 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:32.742861 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:32.742951 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:33.241101 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:33.241147 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:33.241158 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:33.241170 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:33.243826 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:33.243845 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:33.243853 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:33.243860 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:33.243867 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:33.243873 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:33.243879 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:33 GMT
	I1107 23:55:33.243885 1520543 round_trippers.go:580]     Audit-Id: 760c4dd7-e072-491c-8395-d5ee42d98502
	I1107 23:55:33.244020 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:33.740716 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:33.740745 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:33.740755 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:33.740762 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:33.743344 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:33.743365 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:33.743374 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:33.743381 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:33.743387 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:33 GMT
	I1107 23:55:33.743393 1520543 round_trippers.go:580]     Audit-Id: 01ee63a8-0360-4e9e-b185-14345b6045bc
	I1107 23:55:33.743399 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:33.743405 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:33.743508 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:34.240597 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:34.240626 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:34.240637 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:34.240644 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:34.243134 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:34.243154 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:34.243163 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:34 GMT
	I1107 23:55:34.243170 1520543 round_trippers.go:580]     Audit-Id: 718803ce-56c5-41ab-bc69-4c7340a6d032
	I1107 23:55:34.243176 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:34.243182 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:34.243188 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:34.243194 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:34.243398 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:34.243772 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:34.740508 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:34.740533 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:34.740544 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:34.740553 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:34.743380 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:34.743406 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:34.743415 1520543 round_trippers.go:580]     Audit-Id: 312ddb5e-fed5-4fad-a940-15de2f008f42
	I1107 23:55:34.743422 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:34.743428 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:34.743434 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:34.743441 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:34.743447 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:34 GMT
	I1107 23:55:34.743742 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:35.240228 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:35.240256 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:35.240267 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:35.240273 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:35.242955 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:35.242981 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:35.242991 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:35.242998 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:35.243004 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:35.243010 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:35 GMT
	I1107 23:55:35.243016 1520543 round_trippers.go:580]     Audit-Id: b29d5bc8-52ce-461b-be12-200cc5eab405
	I1107 23:55:35.243028 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:35.243221 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:35.741202 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:35.741238 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:35.741248 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:35.741255 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:35.744305 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:35.744325 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:35.744334 1520543 round_trippers.go:580]     Audit-Id: 4f64491c-227d-4ad0-bbfe-227bec8ecc8d
	I1107 23:55:35.744340 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:35.744346 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:35.744352 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:35.744359 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:35.744367 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:35 GMT
	I1107 23:55:35.744785 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:36.240600 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:36.240625 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:36.240635 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:36.240642 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:36.243042 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:36.243063 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:36.243071 1520543 round_trippers.go:580]     Audit-Id: 1608f0ab-5c62-4f03-8d3d-055f5e4c7b06
	I1107 23:55:36.243077 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:36.243084 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:36.243090 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:36.243096 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:36.243104 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:36 GMT
	I1107 23:55:36.243226 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:36.740663 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:36.740690 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:36.740699 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:36.740706 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:36.743752 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:36.743778 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:36.743790 1520543 round_trippers.go:580]     Audit-Id: 0fa4f5cb-af10-4c2f-8bdc-015b2b09260b
	I1107 23:55:36.743805 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:36.743812 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:36.743818 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:36.743828 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:36.743837 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:36 GMT
	I1107 23:55:36.744231 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:36.744622 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:37.240869 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:37.240895 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:37.240905 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:37.240912 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:37.243493 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:37.243519 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:37.243528 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:37.243534 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:37.243540 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:37.243548 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:37 GMT
	I1107 23:55:37.243555 1520543 round_trippers.go:580]     Audit-Id: f1e001fe-82f8-4d4c-836a-f5e3ca523504
	I1107 23:55:37.243561 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:37.243806 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:37.741044 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:37.741075 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:37.741086 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:37.741093 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:37.743761 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:37.743781 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:37.743789 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:37.743796 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:37 GMT
	I1107 23:55:37.743802 1520543 round_trippers.go:580]     Audit-Id: 717e5bf9-d099-4645-a1ec-d8620ce0b0ec
	I1107 23:55:37.743809 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:37.743815 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:37.743821 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:37.743978 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:38.240762 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:38.240804 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:38.240815 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:38.240822 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:38.243306 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:38.243332 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:38.243341 1520543 round_trippers.go:580]     Audit-Id: 59ed166d-8567-4664-9250-eb89eb3c4d86
	I1107 23:55:38.243348 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:38.243354 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:38.243360 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:38.243371 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:38.243379 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:38 GMT
	I1107 23:55:38.243579 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:38.740634 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:38.740658 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:38.740667 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:38.740674 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:38.743665 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:38.743688 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:38.743696 1520543 round_trippers.go:580]     Audit-Id: 9ffe616f-8568-42ae-9aab-45f2e87808a7
	I1107 23:55:38.743703 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:38.743709 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:38.743716 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:38.743722 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:38.743728 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:38 GMT
	I1107 23:55:38.743805 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:39.240365 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:39.240394 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:39.240403 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:39.240412 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:39.242950 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:39.242977 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:39.242986 1520543 round_trippers.go:580]     Audit-Id: 672649c6-2f86-44e2-bd6d-542b693feb47
	I1107 23:55:39.242993 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:39.242999 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:39.243005 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:39.243012 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:39.243018 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:39 GMT
	I1107 23:55:39.243147 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:39.243526 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:39.740205 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:39.740235 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:39.740245 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:39.740252 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:39.742980 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:39.743002 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:39.743010 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:39.743017 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:39.743023 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:39.743029 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:39.743035 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:39 GMT
	I1107 23:55:39.743042 1520543 round_trippers.go:580]     Audit-Id: 7f822c43-dd1b-4266-8729-66f6a1d3d08b
	I1107 23:55:39.743138 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:40.240706 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:40.240776 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:40.240788 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:40.240795 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:40.243286 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:40.243307 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:40.243316 1520543 round_trippers.go:580]     Audit-Id: 7f0ed78a-5e40-46b2-b31b-376339a79e7a
	I1107 23:55:40.243323 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:40.243329 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:40.243336 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:40.243342 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:40.243360 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:40 GMT
	I1107 23:55:40.243967 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:40.741103 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:40.741128 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:40.741138 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:40.741145 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:40.743589 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:40.743614 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:40.743623 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:40.743630 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:40.743636 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:40.743642 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:40.743649 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:40 GMT
	I1107 23:55:40.743658 1520543 round_trippers.go:580]     Audit-Id: 638b1f36-29e8-4db8-a264-6f3c74395650
	I1107 23:55:40.743853 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:41.240850 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:41.240875 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:41.240886 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:41.240894 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:41.243463 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:41.243486 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:41.243495 1520543 round_trippers.go:580]     Audit-Id: b2749d3e-c73d-4b52-a2c0-41c6d11f79f8
	I1107 23:55:41.243502 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:41.243508 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:41.243514 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:41.243520 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:41.243527 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:41 GMT
	I1107 23:55:41.243673 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:41.244042 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:41.741126 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:41.741157 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:41.741167 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:41.741175 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:41.744657 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:41.744683 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:41.744692 1520543 round_trippers.go:580]     Audit-Id: 3491568c-99a6-49fc-b796-63c6ee067a66
	I1107 23:55:41.744699 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:41.744705 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:41.744712 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:41.744729 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:41.744737 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:41 GMT
	I1107 23:55:41.745173 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:42.240956 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:42.240993 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:42.241012 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:42.241020 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:42.248189 1520543 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1107 23:55:42.248215 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:42.248224 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:42.248231 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:42.248240 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:42.248247 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:42 GMT
	I1107 23:55:42.248254 1520543 round_trippers.go:580]     Audit-Id: 87712d0d-dd0b-4997-ad11-84619f240808
	I1107 23:55:42.248261 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:42.248415 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:42.740977 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:42.741029 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:42.741040 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:42.741055 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:42.743558 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:42.743583 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:42.743595 1520543 round_trippers.go:580]     Audit-Id: 6cb33430-564e-44f4-a0da-884da0f29988
	I1107 23:55:42.743602 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:42.743608 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:42.743614 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:42.743622 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:42.743632 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:42 GMT
	I1107 23:55:42.743791 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:43.240257 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:43.240285 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:43.240295 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:43.240303 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:43.242922 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:43.242958 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:43.242967 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:43.242973 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:43.242979 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:43.242985 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:43 GMT
	I1107 23:55:43.242992 1520543 round_trippers.go:580]     Audit-Id: 9c8c55df-7e59-4f78-bcd8-5f840fd84ea0
	I1107 23:55:43.242998 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:43.243280 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:43.740316 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:43.740341 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:43.740351 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:43.740358 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:43.742906 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:43.742926 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:43.742935 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:43.742942 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:43.742948 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:43.742954 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:43 GMT
	I1107 23:55:43.742962 1520543 round_trippers.go:580]     Audit-Id: d3caac5b-57b7-4f7c-9a62-698caf1df84a
	I1107 23:55:43.742972 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:43.743299 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:43.743679 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:44.240357 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:44.240381 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:44.240390 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:44.240397 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:44.243045 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:44.243068 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:44.243076 1520543 round_trippers.go:580]     Audit-Id: 3c01d92b-4b4b-4003-bc37-19bce06afa52
	I1107 23:55:44.243082 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:44.243089 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:44.243095 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:44.243101 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:44.243111 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:44 GMT
	I1107 23:55:44.243421 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:44.740225 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:44.740252 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:44.740262 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:44.740268 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:44.743022 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:44.743044 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:44.743053 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:44.743060 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:44 GMT
	I1107 23:55:44.743067 1520543 round_trippers.go:580]     Audit-Id: c2c11aac-f1fb-4334-84d4-a63ba628cd7b
	I1107 23:55:44.743073 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:44.743083 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:44.743090 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:44.743182 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:45.241185 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:45.241234 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:45.241246 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:45.241253 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:45.245310 1520543 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:55:45.245343 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:45.245353 1520543 round_trippers.go:580]     Audit-Id: 0acf3076-4a1e-4ac9-996c-e70c1221f3e5
	I1107 23:55:45.245360 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:45.245367 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:45.245373 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:45.245379 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:45.245387 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:45 GMT
	I1107 23:55:45.245899 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:45.740168 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:45.740196 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:45.740207 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:45.740214 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:45.742914 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:45.742940 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:45.742949 1520543 round_trippers.go:580]     Audit-Id: 6bc909f9-f97e-4209-b887-5cf8ebf1ac42
	I1107 23:55:45.742955 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:45.742961 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:45.742967 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:45.742975 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:45.742986 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:45 GMT
	I1107 23:55:45.743118 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:46.240197 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:46.240224 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:46.240246 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:46.240257 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:46.242864 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:46.242887 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:46.242896 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:46.242903 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:46.242910 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:46.242916 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:46.242922 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:46 GMT
	I1107 23:55:46.242929 1520543 round_trippers.go:580]     Audit-Id: 8daa2e8c-3c00-4d87-9aab-3bd35d753ddd
	I1107 23:55:46.243416 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:46.243806 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:46.740816 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:46.740840 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:46.740850 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:46.740857 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:46.743433 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:46.743454 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:46.743463 1520543 round_trippers.go:580]     Audit-Id: 6bf5c9c3-3cfc-400c-ba23-7de786c60b7a
	I1107 23:55:46.743469 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:46.743475 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:46.743482 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:46.743488 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:46.743494 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:46 GMT
	I1107 23:55:46.743631 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:47.240756 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:47.240782 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:47.240792 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:47.240800 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:47.243244 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:47.243271 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:47.243280 1520543 round_trippers.go:580]     Audit-Id: a80fefb5-d1a9-4208-a72d-2368cd1b7cdc
	I1107 23:55:47.243286 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:47.243292 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:47.243300 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:47.243311 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:47.243317 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:47 GMT
	I1107 23:55:47.243564 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:47.740241 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:47.740276 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:47.740287 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:47.740295 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:47.742853 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:47.742875 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:47.742883 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:47.742890 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:47 GMT
	I1107 23:55:47.742896 1520543 round_trippers.go:580]     Audit-Id: bb2d8b6d-a09a-451f-8712-a1b7b3a6efd7
	I1107 23:55:47.742902 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:47.742908 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:47.742914 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:47.743018 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:48.241164 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:48.241192 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:48.241203 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:48.241210 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:48.243884 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:48.243912 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:48.243923 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:48.243930 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:48 GMT
	I1107 23:55:48.243936 1520543 round_trippers.go:580]     Audit-Id: 83a8ccb1-9cac-4c87-8de3-7880287dca7b
	I1107 23:55:48.243943 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:48.243950 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:48.243956 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:48.244069 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:48.244442 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:48.740132 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:48.740156 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:48.740166 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:48.740174 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:48.742589 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:48.742618 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:48.742627 1520543 round_trippers.go:580]     Audit-Id: 986794bd-53b7-4a1c-afaa-be1320872938
	I1107 23:55:48.742634 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:48.742640 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:48.742647 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:48.742665 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:48.742673 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:48 GMT
	I1107 23:55:48.743038 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:49.241170 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:49.241196 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:49.241205 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:49.241212 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:49.243858 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:49.243883 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:49.243891 1520543 round_trippers.go:580]     Audit-Id: d3569a70-1855-40b8-ad95-355cdcc3aa98
	I1107 23:55:49.243898 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:49.243904 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:49.243910 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:49.243917 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:49.243923 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:49 GMT
	I1107 23:55:49.244099 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:49.740780 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:49.740804 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:49.740820 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:49.740827 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:49.744340 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:49.744370 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:49.744379 1520543 round_trippers.go:580]     Audit-Id: b95f77a0-6bd2-45ad-a112-3e0b6253926d
	I1107 23:55:49.744386 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:49.744392 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:49.744399 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:49.744405 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:49.744412 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:49 GMT
	I1107 23:55:49.744514 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:50.240684 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:50.240714 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:50.240728 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:50.240736 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:50.244074 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:50.244101 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:50.244109 1520543 round_trippers.go:580]     Audit-Id: 723446d3-0718-4183-a333-edbbec7fb2a2
	I1107 23:55:50.244137 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:50.244147 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:50.244157 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:50.244164 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:50.244175 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:50 GMT
	I1107 23:55:50.244287 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:50.244696 1520543 node_ready.go:58] node "multinode-898977-m02" has status "Ready":"False"
	I1107 23:55:50.740408 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:50.740436 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:50.740446 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:50.740454 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:50.743332 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:50.743359 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:50.743369 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:50 GMT
	I1107 23:55:50.743375 1520543 round_trippers.go:580]     Audit-Id: b84145a7-9bc6-47b1-9aa3-b7807435fb77
	I1107 23:55:50.743382 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:50.743389 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:50.743397 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:50.743404 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:50.743721 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:51.240199 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:51.240228 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.240239 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.240246 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.242875 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.242896 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.242906 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.242913 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.242919 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.242926 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.242932 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.242938 1520543 round_trippers.go:580]     Audit-Id: 4bd907bf-311e-4890-ba53-c2067c194344
	I1107 23:55:51.243121 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"477","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1107 23:55:51.741047 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:51.741070 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.741081 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.741089 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.743642 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.743667 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.743676 1520543 round_trippers.go:580]     Audit-Id: 62055ea9-8e7d-45bf-9f7a-ada4882f43b8
	I1107 23:55:51.743682 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.743688 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.743695 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.743701 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.743711 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.743816 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"499","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I1107 23:55:51.744200 1520543 node_ready.go:49] node "multinode-898977-m02" has status "Ready":"True"
	I1107 23:55:51.744218 1520543 node_ready.go:38] duration metric: took 31.011366852s waiting for node "multinode-898977-m02" to be "Ready" ...
	I1107 23:55:51.744229 1520543 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
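(The node_ready entries above show the wait loop: the Node object is re-fetched from the API server roughly every 500ms and its Ready condition is inspected until it reports True. Below is a minimal, illustrative sketch of that check using client-go; the kubeconfig source, timeout, and poll interval are assumptions for the example, not values taken from minikube's source.)

```go
// nodeready_sketch.go — illustrative only; mirrors the poll-until-Ready loop
// visible in the log above. Assumes a reachable cluster via ~/.kube/config.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the Node's Ready condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "multinode-898977-m02" // node name from this log; adjust as needed
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Printf("node %q is Ready\n", nodeName)
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the timestamps above
	}
	fmt.Printf("timed out waiting for node %q\n", nodeName)
}
```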
	I1107 23:55:51.744290 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:55:51.744302 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.744310 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.744317 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.747997 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:51.748023 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.748032 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.748038 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.748045 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.748051 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.748057 1520543 round_trippers.go:580]     Audit-Id: 9e7924a4-ce0c-429f-a93e-e1c8b391a043
	I1107 23:55:51.748063 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.748896 1520543 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"499"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"412","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1107 23:55:51.751813 1520543 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5822m" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.751903 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5822m
	I1107 23:55:51.751916 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.751925 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.751932 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.754462 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.754486 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.754495 1520543 round_trippers.go:580]     Audit-Id: 515d45b7-1147-46a0-8c54-1c932bf058f3
	I1107 23:55:51.754501 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.754507 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.754514 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.754526 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.754532 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.754735 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5822m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0946267b-9eb0-42c0-8451-34a99c6055fa","resourceVersion":"412","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d5b69ec3-898c-409c-aa7f-29151e434a62","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5b69ec3-898c-409c-aa7f-29151e434a62\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1107 23:55:51.755249 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:51.755268 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.755279 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.755286 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.757689 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.757707 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.757715 1520543 round_trippers.go:580]     Audit-Id: 0700f627-1e69-4801-aac9-6be53971b0db
	I1107 23:55:51.757722 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.757728 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.757734 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.757740 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.757746 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.757908 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:51.758320 1520543 pod_ready.go:92] pod "coredns-5dd5756b68-5822m" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:51.758333 1520543 pod_ready.go:81] duration metric: took 6.49314ms waiting for pod "coredns-5dd5756b68-5822m" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.758344 1520543 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.758405 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-898977
	I1107 23:55:51.758410 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.758418 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.758425 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.760817 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.760844 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.760852 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.760859 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.760865 1520543 round_trippers.go:580]     Audit-Id: 6930daa0-d0ba-48dd-ad9a-7bfe11996002
	I1107 23:55:51.760872 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.760878 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.760885 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.760980 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-898977","namespace":"kube-system","uid":"f044e6fe-c11b-4c4c-86b9-4128bb0094a1","resourceVersion":"384","creationTimestamp":"2023-11-07T23:54:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"237353557045024b06e23bafb1a554bc","kubernetes.io/config.mirror":"237353557045024b06e23bafb1a554bc","kubernetes.io/config.seen":"2023-11-07T23:54:18.228048197Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1107 23:55:51.761437 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:51.761454 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.761463 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.761470 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.763792 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.763813 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.763822 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.763828 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.763834 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.763840 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.763846 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.763853 1520543 round_trippers.go:580]     Audit-Id: 3ca674b8-f7ba-4cdd-b3c8-b8025f53eea3
	I1107 23:55:51.764078 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:51.764484 1520543 pod_ready.go:92] pod "etcd-multinode-898977" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:51.764503 1520543 pod_ready.go:81] duration metric: took 6.152293ms waiting for pod "etcd-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.764520 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.764583 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-898977
	I1107 23:55:51.764594 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.764603 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.764611 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.767048 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.767068 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.767078 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.767084 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.767090 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.767096 1520543 round_trippers.go:580]     Audit-Id: 6311cb0b-1a97-4205-ad96-c60eddd7f052
	I1107 23:55:51.767103 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.767109 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.767458 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-898977","namespace":"kube-system","uid":"421e7824-45c9-4241-a678-ab9289aad2e2","resourceVersion":"385","creationTimestamp":"2023-11-07T23:54:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"79f694efd9903456acbe1877608e409c","kubernetes.io/config.mirror":"79f694efd9903456acbe1877608e409c","kubernetes.io/config.seen":"2023-11-07T23:54:18.228053793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1107 23:55:51.768008 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:51.768027 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.768036 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.768043 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.770353 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.770403 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.770411 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.770418 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.770424 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.770432 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.770446 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.770456 1520543 round_trippers.go:580]     Audit-Id: ff587f6b-afcf-4bec-bac6-e349f2536693
	I1107 23:55:51.770606 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:51.770986 1520543 pod_ready.go:92] pod "kube-apiserver-multinode-898977" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:51.771004 1520543 pod_ready.go:81] duration metric: took 6.472446ms waiting for pod "kube-apiserver-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.771015 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.771079 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-898977
	I1107 23:55:51.771089 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.771098 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.771105 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.774708 1520543 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:55:51.774734 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.774742 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.774749 1520543 round_trippers.go:580]     Audit-Id: 98246974-18cf-4933-a4f9-c427d00f9aa0
	I1107 23:55:51.774755 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.774762 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.774768 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.774775 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.775050 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-898977","namespace":"kube-system","uid":"f99e3f68-3118-43cc-b04a-e031a0b53897","resourceVersion":"386","creationTimestamp":"2023-11-07T23:54:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7ecb0666ede4cd952e8c73745dd34a88","kubernetes.io/config.mirror":"7ecb0666ede4cd952e8c73745dd34a88","kubernetes.io/config.seen":"2023-11-07T23:54:18.228055163Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1107 23:55:51.775569 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:51.775589 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.775599 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.775608 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.780032 1520543 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:55:51.780096 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.780119 1520543 round_trippers.go:580]     Audit-Id: 847c8260-8f22-4104-9331-5412a5ae5556
	I1107 23:55:51.780140 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.780177 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.780201 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.780227 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.780242 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.780416 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:51.780828 1520543 pod_ready.go:92] pod "kube-controller-manager-multinode-898977" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:51.780845 1520543 pod_ready.go:81] duration metric: took 9.818978ms waiting for pod "kube-controller-manager-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.780859 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2v949" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.941144 1520543 request.go:629] Waited for 160.216978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2v949
	I1107 23:55:51.941237 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2v949
	I1107 23:55:51.941247 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:51.941257 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:51.941264 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:51.943952 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:51.944026 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:51.944047 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:51.944069 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:51.944103 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:51 GMT
	I1107 23:55:51.944158 1520543 round_trippers.go:580]     Audit-Id: 3c2cd1ff-414f-41af-a53a-59837fc4ef2f
	I1107 23:55:51.944178 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:51.944193 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:51.944330 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2v949","generateName":"kube-proxy-","namespace":"kube-system","uid":"2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8","resourceVersion":"377","creationTimestamp":"2023-11-07T23:54:31Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3f9ab921-6f58-4e7b-af20-e65af3cf1e74","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3f9ab921-6f58-4e7b-af20-e65af3cf1e74\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1107 23:55:52.141142 1520543 request.go:629] Waited for 196.280881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:52.141263 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:52.141295 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:52.141323 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:52.141344 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:52.143894 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:52.143934 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:52.143970 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:52.143979 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:52 GMT
	I1107 23:55:52.143989 1520543 round_trippers.go:580]     Audit-Id: f60ea566-ef1c-45fc-b773-9eb66bf4db4f
	I1107 23:55:52.143996 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:52.144006 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:52.144013 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:52.144206 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:52.144619 1520543 pod_ready.go:92] pod "kube-proxy-2v949" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:52.144636 1520543 pod_ready.go:81] duration metric: took 363.767713ms waiting for pod "kube-proxy-2v949" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:52.144648 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hhxj9" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:52.341969 1520543 request.go:629] Waited for 197.23677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hhxj9
	I1107 23:55:52.342092 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hhxj9
	I1107 23:55:52.342107 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:52.342116 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:52.342144 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:52.344812 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:52.344838 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:52.344846 1520543 round_trippers.go:580]     Audit-Id: 1cebe66a-5c86-4a19-9bed-17ab20075180
	I1107 23:55:52.344853 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:52.344866 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:52.344873 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:52.344880 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:52.344886 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:52 GMT
	I1107 23:55:52.345032 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hhxj9","generateName":"kube-proxy-","namespace":"kube-system","uid":"c046b045-fdcc-4e54-89b9-d639bf54f7ed","resourceVersion":"465","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3f9ab921-6f58-4e7b-af20-e65af3cf1e74","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3f9ab921-6f58-4e7b-af20-e65af3cf1e74\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1107 23:55:52.541926 1520543 request.go:629] Waited for 196.4081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:52.542064 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977-m02
	I1107 23:55:52.542079 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:52.542089 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:52.542097 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:52.544780 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:52.544839 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:52.544862 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:52 GMT
	I1107 23:55:52.544884 1520543 round_trippers.go:580]     Audit-Id: a953ef59-cee3-4ef6-b29c-74cd417fbbaa
	I1107 23:55:52.544899 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:52.544917 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:52.544923 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:52.544932 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:52.545037 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977-m02","uid":"7ed5905f-83ed-494d-8b41-e284a69f04b1","resourceVersion":"499","creationTimestamp":"2023-11-07T23:55:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:55:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I1107 23:55:52.545415 1520543 pod_ready.go:92] pod "kube-proxy-hhxj9" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:52.545434 1520543 pod_ready.go:81] duration metric: took 400.779571ms waiting for pod "kube-proxy-hhxj9" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:52.545445 1520543 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:52.741900 1520543 request.go:629] Waited for 196.388752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-898977
	I1107 23:55:52.741988 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-898977
	I1107 23:55:52.742001 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:52.742010 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:52.742017 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:52.744619 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:52.744645 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:52.744653 1520543 round_trippers.go:580]     Audit-Id: 3be502ad-d966-4779-a8b7-231cac3ad922
	I1107 23:55:52.744659 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:52.744666 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:52.744675 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:52.744681 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:52.744688 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:52 GMT
	I1107 23:55:52.744852 1520543 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-898977","namespace":"kube-system","uid":"7845b1ea-a5fd-4e03-8157-ae59da7d6651","resourceVersion":"383","creationTimestamp":"2023-11-07T23:54:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad918cb511f596afccb23f4947338cf7","kubernetes.io/config.mirror":"ad918cb511f596afccb23f4947338cf7","kubernetes.io/config.seen":"2023-11-07T23:54:18.228056254Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:54:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1107 23:55:52.941646 1520543 request.go:629] Waited for 196.3287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:52.941706 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-898977
	I1107 23:55:52.941713 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:52.941723 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:52.941735 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:52.944198 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:52.944222 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:52.944231 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:52 GMT
	I1107 23:55:52.944237 1520543 round_trippers.go:580]     Audit-Id: 9284f3da-33dd-4dfd-90c8-0c0515ecf603
	I1107 23:55:52.944251 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:52.944258 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:52.944268 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:52.944275 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:52.944376 1520543 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:54:14Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1107 23:55:52.944797 1520543 pod_ready.go:92] pod "kube-scheduler-multinode-898977" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:52.944816 1520543 pod_ready.go:81] duration metric: took 399.363279ms waiting for pod "kube-scheduler-multinode-898977" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:52.944830 1520543 pod_ready.go:38] duration metric: took 1.200591395s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:55:52.944847 1520543 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:55:52.944904 1520543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:55:52.959321 1520543 system_svc.go:56] duration metric: took 14.464879ms WaitForService to wait for kubelet.
	I1107 23:55:52.959349 1520543 kubeadm.go:581] duration metric: took 32.271438626s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:55:52.959368 1520543 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:55:53.141752 1520543 request.go:629] Waited for 182.310478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1107 23:55:53.141839 1520543 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1107 23:55:53.141845 1520543 round_trippers.go:469] Request Headers:
	I1107 23:55:53.141854 1520543 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:55:53.141861 1520543 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1107 23:55:53.144769 1520543 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:55:53.144805 1520543 round_trippers.go:577] Response Headers:
	I1107 23:55:53.144815 1520543 round_trippers.go:580]     Audit-Id: b4777458-ab4e-49cc-8c6f-65abcf2c5d84
	I1107 23:55:53.144821 1520543 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:55:53.144828 1520543 round_trippers.go:580]     Content-Type: application/json
	I1107 23:55:53.144835 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 99101ecd-e843-4aff-88a5-42979eacf8d2
	I1107 23:55:53.144842 1520543 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9b2a1ed8-bc9c-44c1-87e7-98e35bb9a780
	I1107 23:55:53.144852 1520543 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:55:53 GMT
	I1107 23:55:53.145307 1520543 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"500"},"items":[{"metadata":{"name":"multinode-898977","uid":"b1fdac9c-7195-45c5-922f-7efb6733a11b","resourceVersion":"396","creationTimestamp":"2023-11-07T23:54:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-898977","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-898977","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_54_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1107 23:55:53.146068 1520543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1107 23:55:53.146097 1520543 node_conditions.go:123] node cpu capacity is 2
	I1107 23:55:53.146108 1520543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1107 23:55:53.146122 1520543 node_conditions.go:123] node cpu capacity is 2
	I1107 23:55:53.146131 1520543 node_conditions.go:105] duration metric: took 186.757695ms to run NodePressure ...
	I1107 23:55:53.146146 1520543 start.go:228] waiting for startup goroutines ...
	I1107 23:55:53.146172 1520543 start.go:242] writing updated cluster config ...
	I1107 23:55:53.146498 1520543 ssh_runner.go:195] Run: rm -f paused
	I1107 23:55:53.202516 1520543 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1107 23:55:53.204632 1520543 out.go:177] * Done! kubectl is now configured to use "multinode-898977" cluster and "default" namespace by default
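
As a reference for the readiness polling captured above (repeated GETs against /api/v1/namespaces/kube-system/pods/<name> until the Ready condition reports True), the same check can be made with client-go. This is a minimal sketch, not minikube's own pod_ready.go; the default kubeconfig path and the pod name are assumptions taken from this run.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the default kubeconfig (~/.kube/config), as kubectl does.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Pod name taken from the log above; any kube-system pod works here.
    pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-5822m", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            fmt.Printf("%s Ready=%s\n", pod.Name, c.Status)
        }
    }
}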
	
	* 
	* ==> CRI-O <==
	* Nov 07 23:55:03 multinode-898977 crio[898]: time="2023-11-07 23:55:03.483911019Z" level=info msg="Starting container: 922384f4c561d71b37d2778f3ad69d0a1f7fe53162725d9987172b846faad48b" id=116ca9fd-7b7e-42c8-a67a-832671a7300a name=/runtime.v1.RuntimeService/StartContainer
	Nov 07 23:55:03 multinode-898977 crio[898]: time="2023-11-07 23:55:03.501451524Z" level=info msg="Started container" PID=1928 containerID=922384f4c561d71b37d2778f3ad69d0a1f7fe53162725d9987172b846faad48b description=kube-system/storage-provisioner/storage-provisioner id=116ca9fd-7b7e-42c8-a67a-832671a7300a name=/runtime.v1.RuntimeService/StartContainer sandboxID=568ca5481357c36d83921bc8c2eed1da3935894db119453551c8c6889b9e37c4
	Nov 07 23:55:03 multinode-898977 crio[898]: time="2023-11-07 23:55:03.536251543Z" level=info msg="Created container 50da59e91b76f5c5c4a5bbf37c51cfb9abd10539d16a4bd90d58641e6863e575: kube-system/coredns-5dd5756b68-5822m/coredns" id=f9b3a998-3bdb-4079-9c20-b809f8eaffe9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 07 23:55:03 multinode-898977 crio[898]: time="2023-11-07 23:55:03.536971709Z" level=info msg="Starting container: 50da59e91b76f5c5c4a5bbf37c51cfb9abd10539d16a4bd90d58641e6863e575" id=e8a494dc-3e8b-4bd4-9f96-fca3f291daf5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 07 23:55:03 multinode-898977 crio[898]: time="2023-11-07 23:55:03.555525457Z" level=info msg="Started container" PID=1956 containerID=50da59e91b76f5c5c4a5bbf37c51cfb9abd10539d16a4bd90d58641e6863e575 description=kube-system/coredns-5dd5756b68-5822m/coredns id=e8a494dc-3e8b-4bd4-9f96-fca3f291daf5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a8b88b2c5dbc751f487ee9669c2b433e586ded14701aef94f0223373bd351dfd
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.434803130Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-f95qf/POD" id=fe1165b1-f22f-4beb-a275-9ad356cc64f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.434863675Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.449861005Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-f95qf Namespace:default ID:b2d858004f38585fe70f19d11acce432b2f2bec3c5d812d1f66c16d98e9e21ca UID:0c6683ad-a47f-4402-b668-ce400b7b9834 NetNS:/var/run/netns/9798cfa6-6b13-4cfa-a56d-36a69cba8bfe Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.449901514Z" level=info msg="Adding pod default_busybox-5bc68d56bd-f95qf to CNI network \"kindnet\" (type=ptp)"
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.467301671Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-f95qf Namespace:default ID:b2d858004f38585fe70f19d11acce432b2f2bec3c5d812d1f66c16d98e9e21ca UID:0c6683ad-a47f-4402-b668-ce400b7b9834 NetNS:/var/run/netns/9798cfa6-6b13-4cfa-a56d-36a69cba8bfe Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.467451619Z" level=info msg="Checking pod default_busybox-5bc68d56bd-f95qf for CNI network kindnet (type=ptp)"
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.470085271Z" level=info msg="Ran pod sandbox b2d858004f38585fe70f19d11acce432b2f2bec3c5d812d1f66c16d98e9e21ca with infra container: default/busybox-5bc68d56bd-f95qf/POD" id=fe1165b1-f22f-4beb-a275-9ad356cc64f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.474878986Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=6b7bf808-01e5-422f-b2ba-7a82ec976959 name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.475090595Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=6b7bf808-01e5-422f-b2ba-7a82ec976959 name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.476415260Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=38fe965b-00cb-4eb2-8056-0e3bbafd7571 name=/runtime.v1.ImageService/PullImage
	Nov 07 23:55:54 multinode-898977 crio[898]: time="2023-11-07 23:55:54.477763138Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 07 23:55:55 multinode-898977 crio[898]: time="2023-11-07 23:55:55.418368704Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 07 23:55:57 multinode-898977 crio[898]: time="2023-11-07 23:55:57.190337649Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=38fe965b-00cb-4eb2-8056-0e3bbafd7571 name=/runtime.v1.ImageService/PullImage
	Nov 07 23:55:57 multinode-898977 crio[898]: time="2023-11-07 23:55:57.191554328Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=39f7bcdb-2fb0-4c1f-87b1-c686c1afe538 name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:55:57 multinode-898977 crio[898]: time="2023-11-07 23:55:57.192314593Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=39f7bcdb-2fb0-4c1f-87b1-c686c1afe538 name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:55:57 multinode-898977 crio[898]: time="2023-11-07 23:55:57.193395716Z" level=info msg="Creating container: default/busybox-5bc68d56bd-f95qf/busybox" id=713670ee-c708-49f8-8835-392d12c3c238 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 07 23:55:57 multinode-898977 crio[898]: time="2023-11-07 23:55:57.193501241Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 07 23:55:57 multinode-898977 crio[898]: time="2023-11-07 23:55:57.273659116Z" level=info msg="Created container 4284b4df71d52883986c00d33fd81c8fc40536818661fa86370cde28923a9540: default/busybox-5bc68d56bd-f95qf/busybox" id=713670ee-c708-49f8-8835-392d12c3c238 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 07 23:55:57 multinode-898977 crio[898]: time="2023-11-07 23:55:57.274421284Z" level=info msg="Starting container: 4284b4df71d52883986c00d33fd81c8fc40536818661fa86370cde28923a9540" id=45ced378-4e94-49ac-a4d9-49475d94dfdd name=/runtime.v1.RuntimeService/StartContainer
	Nov 07 23:55:57 multinode-898977 crio[898]: time="2023-11-07 23:55:57.285659951Z" level=info msg="Started container" PID=2082 containerID=4284b4df71d52883986c00d33fd81c8fc40536818661fa86370cde28923a9540 description=default/busybox-5bc68d56bd-f95qf/busybox id=45ced378-4e94-49ac-a4d9-49475d94dfdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=b2d858004f38585fe70f19d11acce432b2f2bec3c5d812d1f66c16d98e9e21ca
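
The CRI-O entries above are the runtime side of the CRI calls kubelet makes (RunPodSandbox, ImageStatus, PullImage, CreateContainer, StartContainer). A hedged Go sketch follows that talks to the same socket with the generated CRI client and lists the containers shown in the next section; the socket path is taken from the cri-socket annotation in this log, and error handling is kept minimal.

package main

import (
    "context"
    "fmt"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Socket path as reported by the node annotation above (unix:///var/run/crio/crio.sock).
    conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    rt := runtimeapi.NewRuntimeServiceClient(conn)
    // List containers, roughly what the "container status" section below prints.
    resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
    if err != nil {
        panic(err)
    }
    for _, c := range resp.Containers {
        fmt.Printf("%.13s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
    }
}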
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4284b4df71d52       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   b2d858004f385       busybox-5bc68d56bd-f95qf
	50da59e91b76f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      59 seconds ago       Running             coredns                   0                   a8b88b2c5dbc7       coredns-5dd5756b68-5822m
	922384f4c561d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      59 seconds ago       Running             storage-provisioner       0                   568ca5481357c       storage-provisioner
	1adab9acf4d84       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   2296c6ba88b97       kindnet-6hghf
	84d60313d2b87       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                      About a minute ago   Running             kube-proxy                0                   e88039dde9bfb       kube-proxy-2v949
	cfadaa439fb27       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                      About a minute ago   Running             kube-controller-manager   0                   df446553a6141       kube-controller-manager-multinode-898977
	c76bf02931199       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                      About a minute ago   Running             kube-scheduler            0                   2f8358bdba985       kube-scheduler-multinode-898977
	d956159624b89       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   da610d869e293       etcd-multinode-898977
	4cf9bcc824144       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                      About a minute ago   Running             kube-apiserver            0                   89fb8099b93ba       kube-apiserver-multinode-898977
	
	* 
	* ==> coredns [50da59e91b76f5c5c4a5bbf37c51cfb9abd10539d16a4bd90d58641e6863e575] <==
	* [INFO] 10.244.0.3:51240 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011126s
	[INFO] 10.244.1.2:44565 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160294s
	[INFO] 10.244.1.2:34896 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00170365s
	[INFO] 10.244.1.2:37340 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099667s
	[INFO] 10.244.1.2:55181 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131297s
	[INFO] 10.244.1.2:50422 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000944427s
	[INFO] 10.244.1.2:34697 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058765s
	[INFO] 10.244.1.2:48953 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084742s
	[INFO] 10.244.1.2:52822 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065517s
	[INFO] 10.244.0.3:60804 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133504s
	[INFO] 10.244.0.3:39287 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090748s
	[INFO] 10.244.0.3:56941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006962s
	[INFO] 10.244.0.3:43047 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076439s
	[INFO] 10.244.1.2:42048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098387s
	[INFO] 10.244.1.2:41488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103326s
	[INFO] 10.244.1.2:37892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064886s
	[INFO] 10.244.1.2:42559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092586s
	[INFO] 10.244.0.3:50654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127728s
	[INFO] 10.244.0.3:47274 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166325s
	[INFO] 10.244.0.3:51019 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118399s
	[INFO] 10.244.0.3:48115 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097082s
	[INFO] 10.244.1.2:55388 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122518s
	[INFO] 10.244.1.2:47934 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108093s
	[INFO] 10.244.1.2:50581 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079302s
	[INFO] 10.244.1.2:58413 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000062662s
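
The query log above shows resolv.conf search-path handling: queries for "kubernetes.default" and "kubernetes.default.default.svc.cluster.local" come back NXDOMAIN, while "kubernetes.default.svc.cluster.local" resolves (NOERROR). A minimal in-cluster sketch of the same lookups, assuming it runs inside a pod whose /etc/resolv.conf points at the cluster DNS served by this coredns:

package main

import (
    "fmt"
    "net"
)

func main() {
    names := []string{
        "kubernetes.default",                   // relies on the pod's search domains
        "kubernetes.default.svc.cluster.local", // fully qualified service name
    }
    for _, n := range names {
        // Each lookup here shows up in the coredns query log, as captured above.
        addrs, err := net.LookupHost(n)
        fmt.Printf("%-40s addrs=%v err=%v\n", n, addrs, err)
    }
}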
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-898977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-898977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=multinode-898977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_54_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:54:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-898977
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:56:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:55:03 +0000   Tue, 07 Nov 2023 23:54:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:55:03 +0000   Tue, 07 Nov 2023 23:54:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:55:03 +0000   Tue, 07 Nov 2023 23:54:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:55:03 +0000   Tue, 07 Nov 2023 23:55:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-898977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2b2c055acfd41a18729f1cc251b0dfa
	  System UUID:                1bf82bdd-d85c-4a5b-8bc7-8555bca20d3b
	  Boot ID:                    b7db73c9-0d39-49c2-bed0-71d8dac21d90
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-f95qf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-5822m                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 etcd-multinode-898977                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         104s
	  kube-system                 kindnet-6hghf                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-multinode-898977             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-multinode-898977    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-2v949                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-multinode-898977             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 89s   kube-proxy       
	  Normal  Starting                 104s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s  kubelet          Node multinode-898977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s  kubelet          Node multinode-898977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s  kubelet          Node multinode-898977 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s   node-controller  Node multinode-898977 event: Registered Node multinode-898977 in Controller
	  Normal  NodeReady                59s   kubelet          Node multinode-898977 status is now: NodeReady
	
	
	Name:               multinode-898977-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-898977-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:55:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-898977-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:56:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:55:51 +0000   Tue, 07 Nov 2023 23:55:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:55:51 +0000   Tue, 07 Nov 2023 23:55:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:55:51 +0000   Tue, 07 Nov 2023 23:55:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:55:51 +0000   Tue, 07 Nov 2023 23:55:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-898977-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cc69df74ecd4186a579df045f0019b6
	  System UUID:                351575ce-de6b-4800-a5ee-0bd2d3263765
	  Boot ID:                    b7db73c9-0d39-49c2-bed0-71d8dac21d90
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xprzg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-lc2rr               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-hhxj9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)  kubelet          Node multinode-898977-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)  kubelet          Node multinode-898977-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)  kubelet          Node multinode-898977-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node multinode-898977-m02 event: Registered Node multinode-898977-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-898977-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001043] FS-Cache: O-key=[8] '76d7c90000000000'
	[  +0.000738] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=00000000bc8cf6fc
	[  +0.001025] FS-Cache: N-key=[8] '76d7c90000000000'
	[  +0.003396] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=000000002bce5c9d
	[  +0.001174] FS-Cache: O-key=[8] '76d7c90000000000'
	[  +0.000722] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=00000000fd31c591
	[  +0.001038] FS-Cache: N-key=[8] '76d7c90000000000'
	[  +2.872966] FS-Cache: Duplicate cookie detected
	[  +0.000737] FS-Cache: O-cookie c=0000004d [p=0000004b fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=000000006496a620
	[  +0.001050] FS-Cache: O-key=[8] '75d7c90000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001036] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=0000000055832ffe
	[  +0.001096] FS-Cache: N-key=[8] '75d7c90000000000'
	[  +0.450474] FS-Cache: Duplicate cookie detected
	[  +0.000703] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=000000005daa1a21{9p.inode} n=000000005bc02455
	[  +0.001040] FS-Cache: O-key=[8] '7bd7c90000000000'
	[  +0.000707] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=000000005daa1a21{9p.inode} n=000000009aecb77a
	[  +0.001068] FS-Cache: N-key=[8] '7bd7c90000000000'
	
	* 
	* ==> etcd [d956159624b897b171ba1d01f6736a566ac6ea39197e19b0429a1dd049975002] <==
	* {"level":"info","ts":"2023-11-07T23:54:10.908597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-11-07T23:54:10.908877Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-11-07T23:54:10.91058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-07T23:54:10.910754Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-07T23:54:10.910881Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-07T23:54:10.911463Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-07T23:54:10.911531Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-07T23:54:11.68202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-07T23:54:11.682156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-07T23:54:11.682222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-07T23:54:11.682277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-07T23:54:11.682319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-07T23:54:11.68237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-07T23:54:11.682413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-07T23:54:11.68606Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:54:11.690126Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:54:11.690224Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:54:11.690271Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:54:11.690325Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-898977 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-07T23:54:11.690369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:54:11.69134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-07T23:54:11.69179Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:54:11.692646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-07T23:54:11.694028Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-07T23:54:11.698066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:56:02 up  6:38,  0 users,  load average: 1.72, 1.97, 2.04
	Linux multinode-898977 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1adab9acf4d84d10e494be3cc4c9ccb6c2c173dc11146c0e94507e1067c3f929] <==
	* I1107 23:54:32.622307       1 main.go:116] setting mtu 1500 for CNI 
	I1107 23:54:32.622349       1 main.go:146] kindnetd IP family: "ipv4"
	I1107 23:54:32.622387       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1107 23:55:02.847959       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1107 23:55:02.871649       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:55:02.871681       1 main.go:227] handling current node
	I1107 23:55:12.887324       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:55:12.887562       1 main.go:227] handling current node
	I1107 23:55:22.900425       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:55:22.900456       1 main.go:227] handling current node
	I1107 23:55:22.900471       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1107 23:55:22.900477       1 main.go:250] Node multinode-898977-m02 has CIDR [10.244.1.0/24] 
	I1107 23:55:22.900625       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1107 23:55:32.905135       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:55:32.905164       1 main.go:227] handling current node
	I1107 23:55:32.905176       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1107 23:55:32.905181       1 main.go:250] Node multinode-898977-m02 has CIDR [10.244.1.0/24] 
	I1107 23:55:42.918766       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:55:42.918794       1 main.go:227] handling current node
	I1107 23:55:42.918805       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1107 23:55:42.918812       1 main.go:250] Node multinode-898977-m02 has CIDR [10.244.1.0/24] 
	I1107 23:55:52.923230       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:55:52.923262       1 main.go:227] handling current node
	I1107 23:55:52.923272       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1107 23:55:52.923278       1 main.go:250] Node multinode-898977-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [4cf9bcc8241440b9b845e70b6ab0eef1a2e526aaf87c8491a20d76ea24d6baf1] <==
	* I1107 23:54:15.089890       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1107 23:54:15.091007       1 aggregator.go:166] initial CRD sync complete...
	I1107 23:54:15.091456       1 autoregister_controller.go:141] Starting autoregister controller
	I1107 23:54:15.091513       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1107 23:54:15.091550       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:54:15.091606       1 shared_informer.go:318] Caches are synced for configmaps
	I1107 23:54:15.092551       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1107 23:54:15.094752       1 controller.go:624] quota admission added evaluator for: namespaces
	E1107 23:54:15.124934       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1107 23:54:15.328553       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:54:15.791593       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1107 23:54:15.796254       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1107 23:54:15.796368       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 23:54:16.367916       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:54:16.412201       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 23:54:16.501767       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1107 23:54:16.509340       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1107 23:54:16.510511       1 controller.go:624] quota admission added evaluator for: endpoints
	I1107 23:54:16.515591       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 23:54:17.044349       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1107 23:54:18.117332       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1107 23:54:18.129214       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1107 23:54:18.147060       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1107 23:54:31.368901       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1107 23:54:31.462818       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [cfadaa439fb272a0ea2b4a7f700b56c33fed2e89cb1c65dc5108e795197df5d4] <==
	* I1107 23:54:32.088480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="179.92µs"
	I1107 23:55:03.046895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.775µs"
	I1107 23:55:03.075924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.571µs"
	I1107 23:55:04.504318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.621474ms"
	I1107 23:55:04.505052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.131µs"
	I1107 23:55:06.352408       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1107 23:55:19.899100       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-898977-m02\" does not exist"
	I1107 23:55:19.913022       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-898977-m02" podCIDRs=["10.244.1.0/24"]
	I1107 23:55:19.927337       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lc2rr"
	I1107 23:55:19.927441       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hhxj9"
	I1107 23:55:21.354445       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-898977-m02"
	I1107 23:55:21.354586       1 event.go:307] "Event occurred" object="multinode-898977-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-898977-m02 event: Registered Node multinode-898977-m02 in Controller"
	I1107 23:55:51.363553       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-898977-m02"
	I1107 23:55:54.064518       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1107 23:55:54.085818       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-xprzg"
	I1107 23:55:54.116003       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-f95qf"
	I1107 23:55:54.142701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.94488ms"
	I1107 23:55:54.166017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.258684ms"
	I1107 23:55:54.198367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.294925ms"
	I1107 23:55:54.198542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.342µs"
	I1107 23:55:56.370115       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-xprzg" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-xprzg"
	I1107 23:55:57.544263       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.590214ms"
	I1107 23:55:57.544797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="83.495µs"
	I1107 23:55:57.585218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.449653ms"
	I1107 23:55:57.585295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.729µs"
	
	* 
	* ==> kube-proxy [84d60313d2b877c6f85fb953ce16cd2ae86685a4c04660e97df0a5a5eb24b2ed] <==
	* I1107 23:54:32.805552       1 server_others.go:69] "Using iptables proxy"
	I1107 23:54:32.832624       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1107 23:54:32.969156       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1107 23:54:32.976902       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:54:32.976945       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1107 23:54:32.976954       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1107 23:54:32.977030       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:54:32.977278       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:54:32.978388       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:54:32.982046       1 config.go:188] "Starting service config controller"
	I1107 23:54:32.982132       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:54:32.982178       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:54:32.982215       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:54:32.982725       1 config.go:315] "Starting node config controller"
	I1107 23:54:32.982782       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:54:33.082944       1 shared_informer.go:318] Caches are synced for node config
	I1107 23:54:33.082984       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:54:33.083026       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c76bf029311990cb9a1de0f49edb64be303c20bf207d8b9f722e8d01c7872d26] <==
	* W1107 23:54:15.066577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:54:15.066614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 23:54:15.066707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:54:15.066727       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:54:15.066773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1107 23:54:15.066811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:54:15.066829       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:54:15.066862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1107 23:54:15.866180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:54:15.866233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1107 23:54:15.881313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:54:15.881456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 23:54:16.009633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:54:16.009761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 23:54:16.059623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:54:16.059745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:54:16.067681       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 23:54:16.067795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 23:54:16.121496       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:54:16.121528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1107 23:54:16.171795       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:54:16.171834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 23:54:16.306311       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 23:54:16.306439       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1107 23:54:19.319410       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.563695    1392 topology_manager.go:215] "Topology Admit Handler" podUID="12c0dff2-21a3-435f-aef2-d2201a778bc8" podNamespace="kube-system" podName="kindnet-6hghf"
	Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.721499    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8-kube-proxy\") pod \"kube-proxy-2v949\" (UID: \"2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8\") " pod="kube-system/kube-proxy-2v949"
	Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.721569    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2zt\" (UniqueName: \"kubernetes.io/projected/2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8-kube-api-access-ch2zt\") pod \"kube-proxy-2v949\" (UID: \"2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8\") " pod="kube-system/kube-proxy-2v949"
	Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.721602    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12c0dff2-21a3-435f-aef2-d2201a778bc8-lib-modules\") pod \"kindnet-6hghf\" (UID: \"12c0dff2-21a3-435f-aef2-d2201a778bc8\") " pod="kube-system/kindnet-6hghf"
	Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.721627    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/12c0dff2-21a3-435f-aef2-d2201a778bc8-cni-cfg\") pod \"kindnet-6hghf\" (UID: \"12c0dff2-21a3-435f-aef2-d2201a778bc8\") " pod="kube-system/kindnet-6hghf"
	Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.721650    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtln4\" (UniqueName: \"kubernetes.io/projected/12c0dff2-21a3-435f-aef2-d2201a778bc8-kube-api-access-gtln4\") pod \"kindnet-6hghf\" (UID: \"12c0dff2-21a3-435f-aef2-d2201a778bc8\") " pod="kube-system/kindnet-6hghf"
	Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.721676    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8-xtables-lock\") pod \"kube-proxy-2v949\" (UID: \"2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8\") " pod="kube-system/kube-proxy-2v949"
	Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.721700    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8-lib-modules\") pod \"kube-proxy-2v949\" (UID: \"2ce4ab97-e8e0-4e78-9f7e-d3fb4c4f46c8\") " pod="kube-system/kube-proxy-2v949"
	Nov 07 23:54:31 multinode-898977 kubelet[1392]: I1107 23:54:31.721723    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12c0dff2-21a3-435f-aef2-d2201a778bc8-xtables-lock\") pod \"kindnet-6hghf\" (UID: \"12c0dff2-21a3-435f-aef2-d2201a778bc8\") " pod="kube-system/kindnet-6hghf"
	Nov 07 23:54:32 multinode-898977 kubelet[1392]: W1107 23:54:32.222847    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/crio-e88039dde9bfb7de23e982e315e9d4ea812316c3b320cea15672dd6b18331a87 WatchSource:0}: Error finding container e88039dde9bfb7de23e982e315e9d4ea812316c3b320cea15672dd6b18331a87: Status 404 returned error can't find the container with id e88039dde9bfb7de23e982e315e9d4ea812316c3b320cea15672dd6b18331a87
	Nov 07 23:54:32 multinode-898977 kubelet[1392]: W1107 23:54:32.225239    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/crio-2296c6ba88b971dcd017b91acf01c734c0ca07af638f9310f63942626c08a3fd WatchSource:0}: Error finding container 2296c6ba88b971dcd017b91acf01c734c0ca07af638f9310f63942626c08a3fd: Status 404 returned error can't find the container with id 2296c6ba88b971dcd017b91acf01c734c0ca07af638f9310f63942626c08a3fd
	Nov 07 23:54:33 multinode-898977 kubelet[1392]: I1107 23:54:33.430358    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2v949" podStartSLOduration=2.430311831 podCreationTimestamp="2023-11-07 23:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:54:33.41628904 +0000 UTC m=+15.333024578" watchObservedRunningTime="2023-11-07 23:54:33.430311831 +0000 UTC m=+15.347047353"
	Nov 07 23:54:38 multinode-898977 kubelet[1392]: I1107 23:54:38.313776    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-6hghf" podStartSLOduration=7.313735892 podCreationTimestamp="2023-11-07 23:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:54:33.430705633 +0000 UTC m=+15.347441147" watchObservedRunningTime="2023-11-07 23:54:38.313735892 +0000 UTC m=+20.230471414"
	Nov 07 23:55:03 multinode-898977 kubelet[1392]: I1107 23:55:03.010703    1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 07 23:55:03 multinode-898977 kubelet[1392]: I1107 23:55:03.045660    1392 topology_manager.go:215] "Topology Admit Handler" podUID="0946267b-9eb0-42c0-8451-34a99c6055fa" podNamespace="kube-system" podName="coredns-5dd5756b68-5822m"
	Nov 07 23:55:03 multinode-898977 kubelet[1392]: I1107 23:55:03.049906    1392 topology_manager.go:215] "Topology Admit Handler" podUID="1e92762e-f03a-4e20-9228-9a7ee152c9d1" podNamespace="kube-system" podName="storage-provisioner"
	Nov 07 23:55:03 multinode-898977 kubelet[1392]: I1107 23:55:03.159945    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6b4\" (UniqueName: \"kubernetes.io/projected/0946267b-9eb0-42c0-8451-34a99c6055fa-kube-api-access-jk6b4\") pod \"coredns-5dd5756b68-5822m\" (UID: \"0946267b-9eb0-42c0-8451-34a99c6055fa\") " pod="kube-system/coredns-5dd5756b68-5822m"
	Nov 07 23:55:03 multinode-898977 kubelet[1392]: I1107 23:55:03.160008    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e92762e-f03a-4e20-9228-9a7ee152c9d1-tmp\") pod \"storage-provisioner\" (UID: \"1e92762e-f03a-4e20-9228-9a7ee152c9d1\") " pod="kube-system/storage-provisioner"
	Nov 07 23:55:03 multinode-898977 kubelet[1392]: I1107 23:55:03.160038    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0946267b-9eb0-42c0-8451-34a99c6055fa-config-volume\") pod \"coredns-5dd5756b68-5822m\" (UID: \"0946267b-9eb0-42c0-8451-34a99c6055fa\") " pod="kube-system/coredns-5dd5756b68-5822m"
	Nov 07 23:55:03 multinode-898977 kubelet[1392]: I1107 23:55:03.160077    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pbgq\" (UniqueName: \"kubernetes.io/projected/1e92762e-f03a-4e20-9228-9a7ee152c9d1-kube-api-access-5pbgq\") pod \"storage-provisioner\" (UID: \"1e92762e-f03a-4e20-9228-9a7ee152c9d1\") " pod="kube-system/storage-provisioner"
	Nov 07 23:55:03 multinode-898977 kubelet[1392]: W1107 23:55:03.406770    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/crio-a8b88b2c5dbc751f487ee9669c2b433e586ded14701aef94f0223373bd351dfd WatchSource:0}: Error finding container a8b88b2c5dbc751f487ee9669c2b433e586ded14701aef94f0223373bd351dfd: Status 404 returned error can't find the container with id a8b88b2c5dbc751f487ee9669c2b433e586ded14701aef94f0223373bd351dfd
	Nov 07 23:55:04 multinode-898977 kubelet[1392]: I1107 23:55:04.491139    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.491095832 podCreationTimestamp="2023-11-07 23:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:55:04.478558255 +0000 UTC m=+46.395293777" watchObservedRunningTime="2023-11-07 23:55:04.491095832 +0000 UTC m=+46.407831346"
	Nov 07 23:55:54 multinode-898977 kubelet[1392]: I1107 23:55:54.133137    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5822m" podStartSLOduration=83.133097025 podCreationTimestamp="2023-11-07 23:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:55:04.492121013 +0000 UTC m=+46.408856527" watchObservedRunningTime="2023-11-07 23:55:54.133097025 +0000 UTC m=+96.049832547"
	Nov 07 23:55:54 multinode-898977 kubelet[1392]: I1107 23:55:54.133434    1392 topology_manager.go:215] "Topology Admit Handler" podUID="0c6683ad-a47f-4402-b668-ce400b7b9834" podNamespace="default" podName="busybox-5bc68d56bd-f95qf"
	Nov 07 23:55:54 multinode-898977 kubelet[1392]: I1107 23:55:54.172026    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vv2l\" (UniqueName: \"kubernetes.io/projected/0c6683ad-a47f-4402-b668-ce400b7b9834-kube-api-access-8vv2l\") pod \"busybox-5bc68d56bd-f95qf\" (UID: \"0c6683ad-a47f-4402-b668-ce400b7b9834\") " pod="default/busybox-5bc68d56bd-f95qf"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-898977 -n multinode-898977
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-898977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.71s)
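Editor's note: the failing check can be approximated by hand against the same cluster. The snippet below is a minimal sketch, not the harness's actual invocation; the busybox pod names come from the captured logs above, while the ping target (the docker network's host-side gateway, 192.168.58.1) is an assumption inferred from the node IPs 192.168.58.2/.3.

	# Sketch: ping the host from each busybox pod (busybox ping supports -c and -W)
	for pod in busybox-5bc68d56bd-f95qf busybox-5bc68d56bd-xprzg; do
	  kubectl --context multinode-898977 exec "$pod" -- ping -c 1 -W 2 192.168.58.1
	done

If the pods cannot reach the host, the kindnet routes shown in the post-mortem logs (one per node PodCIDR) are the first thing to inspect.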

                                                
                                    
x
+
TestRunningBinaryUpgrade (73.42s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.2019984846.exe start -p running-upgrade-519102 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.2019984846.exe start -p running-upgrade-519102 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m4.438988319s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-519102 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-519102 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.94691628s)

                                                
                                                
-- stdout --
	* [running-upgrade-519102] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-519102 in cluster running-upgrade-519102
	* Pulling base image ...
	* Updating the running docker "running-upgrade-519102" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:12:49.656149 1580843 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:12:49.656327 1580843 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:49.656341 1580843 out.go:309] Setting ErrFile to fd 2...
	I1108 00:12:49.656347 1580843 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:49.656695 1580843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1108 00:12:49.657760 1580843 out.go:303] Setting JSON to false
	I1108 00:12:49.659112 1580843 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24919,"bootTime":1699377451,"procs":418,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1108 00:12:49.659196 1580843 start.go:138] virtualization:  
	I1108 00:12:49.661992 1580843 out.go:177] * [running-upgrade-519102] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 00:12:49.664540 1580843 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:12:49.666451 1580843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:12:49.664726 1580843 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1108 00:12:49.664742 1580843 notify.go:220] Checking for updates...
	I1108 00:12:49.670469 1580843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1108 00:12:49.672510 1580843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1108 00:12:49.674472 1580843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 00:12:49.676231 1580843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:12:49.678489 1580843 config.go:182] Loaded profile config "running-upgrade-519102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1108 00:12:49.681074 1580843 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1108 00:12:49.682894 1580843 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:12:49.720711 1580843 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 00:12:49.720828 1580843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 00:12:49.828771 1580843 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-08 00:12:49.815194161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 00:12:49.828906 1580843 docker.go:295] overlay module found
	I1108 00:12:49.831604 1580843 out.go:177] * Using the docker driver based on existing profile
	I1108 00:12:49.833232 1580843 start.go:298] selected driver: docker
	I1108 00:12:49.833249 1580843 start.go:902] validating driver "docker" against &{Name:running-upgrade-519102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-519102 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.191 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1108 00:12:49.833372 1580843 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:12:49.834656 1580843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 00:12:49.849189 1580843 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1108 00:12:49.929616 1580843 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-08 00:12:49.918886111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 00:12:49.929941 1580843 cni.go:84] Creating CNI manager for ""
	I1108 00:12:49.929960 1580843 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 00:12:49.930044 1580843 start_flags.go:323] config:
	{Name:running-upgrade-519102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-519102 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.191 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1108 00:12:49.933256 1580843 out.go:177] * Starting control plane node running-upgrade-519102 in cluster running-upgrade-519102
	I1108 00:12:49.935080 1580843 cache.go:121] Beginning downloading kic base image for docker with crio
	I1108 00:12:49.936953 1580843 out.go:177] * Pulling base image ...
	I1108 00:12:49.938604 1580843 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1108 00:12:49.938792 1580843 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1108 00:12:49.961838 1580843 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1108 00:12:49.961863 1580843 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1108 00:12:50.020450 1580843 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1108 00:12:50.020635 1580843 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/running-upgrade-519102/config.json ...
	I1108 00:12:50.020937 1580843 cache.go:194] Successfully downloaded all kic artifacts
	I1108 00:12:50.020997 1580843 start.go:365] acquiring machines lock for running-upgrade-519102: {Name:mk76f7588adde77760e3d2465b75f63caf1f3c97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.021061 1580843 start.go:369] acquired machines lock for "running-upgrade-519102" in 37.07µs
	I1108 00:12:50.021081 1580843 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:50.021089 1580843 fix.go:54] fixHost starting: 
	I1108 00:12:50.021368 1580843 cli_runner.go:164] Run: docker container inspect running-upgrade-519102 --format={{.State.Status}}
	I1108 00:12:50.021607 1580843 cache.go:107] acquiring lock: {Name:mkcd7bd8164a58c24f54ac63c7eba76b9f8dc423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.021677 1580843 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 00:12:50.021691 1580843 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 88.065µs
	I1108 00:12:50.021744 1580843 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 00:12:50.021764 1580843 cache.go:107] acquiring lock: {Name:mk5f4678cde071ac291daca98ee4b8d032ff92bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.021803 1580843 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1108 00:12:50.021812 1580843 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 49.813µs
	I1108 00:12:50.021819 1580843 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1108 00:12:50.021829 1580843 cache.go:107] acquiring lock: {Name:mk056c2b8d4fa01baa4d27688b624c39a4b0d073 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.021862 1580843 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1108 00:12:50.021871 1580843 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 43.052µs
	I1108 00:12:50.021878 1580843 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1108 00:12:50.021887 1580843 cache.go:107] acquiring lock: {Name:mk642d2c2ec64bded73e646a7e731120f64551b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.021920 1580843 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1108 00:12:50.021929 1580843 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 43.282µs
	I1108 00:12:50.021935 1580843 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1108 00:12:50.021960 1580843 cache.go:107] acquiring lock: {Name:mk9e35b0d2aaeaeda779068ba9df17902904c0e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.022054 1580843 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1108 00:12:50.022067 1580843 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 106.731µs
	I1108 00:12:50.022080 1580843 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1108 00:12:50.022090 1580843 cache.go:107] acquiring lock: {Name:mkfd477add759fc1236d874a521416746bbb0d5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.022128 1580843 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1108 00:12:50.022136 1580843 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 47.515µs
	I1108 00:12:50.022145 1580843 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1108 00:12:50.022159 1580843 cache.go:107] acquiring lock: {Name:mk42aa21c87376b51008fca60e54933db2e47906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.022190 1580843 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1108 00:12:50.022198 1580843 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 45.439µs
	I1108 00:12:50.022205 1580843 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1108 00:12:50.022214 1580843 cache.go:107] acquiring lock: {Name:mka6ad01a28295ccdde91d33e10b2426036c340e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:50.022243 1580843 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1108 00:12:50.022253 1580843 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 39.778µs
	I1108 00:12:50.022263 1580843 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1108 00:12:50.022273 1580843 cache.go:87] Successfully saved all images to host disk.
	I1108 00:12:50.041569 1580843 fix.go:102] recreateIfNeeded on running-upgrade-519102: state=Running err=<nil>
	W1108 00:12:50.041603 1580843 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:50.044353 1580843 out.go:177] * Updating the running docker "running-upgrade-519102" container ...
	I1108 00:12:50.046145 1580843 machine.go:88] provisioning docker machine ...
	I1108 00:12:50.046175 1580843 ubuntu.go:169] provisioning hostname "running-upgrade-519102"
	I1108 00:12:50.046256 1580843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-519102
	I1108 00:12:50.066181 1580843 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:50.066618 1580843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1108 00:12:50.066636 1580843 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-519102 && echo "running-upgrade-519102" | sudo tee /etc/hostname
	I1108 00:12:50.244312 1580843 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-519102
	
	I1108 00:12:50.244398 1580843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-519102
	I1108 00:12:50.264719 1580843 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:50.265162 1580843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1108 00:12:50.265187 1580843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-519102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-519102/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-519102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:12:50.408526 1580843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:50.408604 1580843 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-1449649/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-1449649/.minikube}
	I1108 00:12:50.408667 1580843 ubuntu.go:177] setting up certificates
	I1108 00:12:50.408694 1580843 provision.go:83] configureAuth start
	I1108 00:12:50.408831 1580843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-519102
	I1108 00:12:50.429246 1580843 provision.go:138] copyHostCerts
	I1108 00:12:50.429323 1580843 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem, removing ...
	I1108 00:12:50.429343 1580843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1108 00:12:50.429422 1580843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem (1082 bytes)
	I1108 00:12:50.429525 1580843 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem, removing ...
	I1108 00:12:50.429530 1580843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1108 00:12:50.429558 1580843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem (1123 bytes)
	I1108 00:12:50.429731 1580843 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem, removing ...
	I1108 00:12:50.429741 1580843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1108 00:12:50.429788 1580843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem (1675 bytes)
	I1108 00:12:50.429848 1580843 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-519102 san=[192.168.70.191 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-519102]
	I1108 00:12:51.127994 1580843 provision.go:172] copyRemoteCerts
	I1108 00:12:51.128117 1580843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:12:51.128196 1580843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-519102
	I1108 00:12:51.151104 1580843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/running-upgrade-519102/id_rsa Username:docker}
	I1108 00:12:51.274451 1580843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 00:12:51.313039 1580843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:12:51.337863 1580843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:12:51.362433 1580843 provision.go:86] duration metric: configureAuth took 953.695089ms
	I1108 00:12:51.362459 1580843 ubuntu.go:193] setting minikube options for container-runtime
	I1108 00:12:51.362664 1580843 config.go:182] Loaded profile config "running-upgrade-519102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1108 00:12:51.362774 1580843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-519102
	I1108 00:12:51.383621 1580843 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:51.384027 1580843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1108 00:12:51.384048 1580843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:12:52.018965 1580843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:12:52.018994 1580843 machine.go:91] provisioned docker machine in 1.972830215s
	I1108 00:12:52.019019 1580843 start.go:300] post-start starting for "running-upgrade-519102" (driver="docker")
	I1108 00:12:52.019060 1580843 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:12:52.019143 1580843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:12:52.019215 1580843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-519102
	I1108 00:12:52.039916 1580843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/running-upgrade-519102/id_rsa Username:docker}
	I1108 00:12:52.139596 1580843 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:12:52.143821 1580843 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1108 00:12:52.143850 1580843 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 00:12:52.143862 1580843 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1108 00:12:52.143877 1580843 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1108 00:12:52.143888 1580843 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/addons for local assets ...
	I1108 00:12:52.143960 1580843 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/files for local assets ...
	I1108 00:12:52.144054 1580843 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> 14550192.pem in /etc/ssl/certs
	I1108 00:12:52.144175 1580843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:12:52.158819 1580843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /etc/ssl/certs/14550192.pem (1708 bytes)
	I1108 00:12:52.189882 1580843 start.go:303] post-start completed in 170.818349ms
	I1108 00:12:52.190108 1580843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 00:12:52.190178 1580843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-519102
	I1108 00:12:52.209471 1580843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/running-upgrade-519102/id_rsa Username:docker}
	I1108 00:12:52.309543 1580843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 00:12:52.315281 1580843 fix.go:56] fixHost completed within 2.294186189s
	I1108 00:12:52.315304 1580843 start.go:83] releasing machines lock for "running-upgrade-519102", held for 2.294229257s
	I1108 00:12:52.315374 1580843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-519102
	I1108 00:12:52.334204 1580843 ssh_runner.go:195] Run: cat /version.json
	I1108 00:12:52.334252 1580843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:12:52.334259 1580843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-519102
	I1108 00:12:52.334306 1580843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-519102
	I1108 00:12:52.360333 1580843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/running-upgrade-519102/id_rsa Username:docker}
	I1108 00:12:52.366147 1580843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/running-upgrade-519102/id_rsa Username:docker}
	W1108 00:12:52.463175 1580843 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1108 00:12:52.463289 1580843 ssh_runner.go:195] Run: systemctl --version
	I1108 00:12:52.630142 1580843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:12:52.808722 1580843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1108 00:12:52.815339 1580843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:12:52.839060 1580843 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1108 00:12:52.839194 1580843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:12:52.874933 1580843 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:12:52.875004 1580843 start.go:472] detecting cgroup driver to use...
	I1108 00:12:52.875052 1580843 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1108 00:12:52.875107 1580843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:12:52.919311 1580843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:12:52.932802 1580843 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:12:52.932949 1580843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:12:52.947983 1580843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:12:52.962844 1580843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1108 00:12:52.976701 1580843 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1108 00:12:52.976856 1580843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:12:53.125004 1580843 docker.go:219] disabling docker service ...
	I1108 00:12:53.125077 1580843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:12:53.139780 1580843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:12:53.160534 1580843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:12:53.304611 1580843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:12:53.463593 1580843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:12:53.477809 1580843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:12:53.496325 1580843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1108 00:12:53.496441 1580843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:12:53.512944 1580843 out.go:177] 
	W1108 00:12:53.514853 1580843 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1108 00:12:53.515030 1580843 out.go:239] * 
	* 
	W1108 00:12:53.516313 1580843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:12:53.519471 1580843 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-519102 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
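The exit status 90 traces to the pause-image step captured in the stderr above: the new binary runs `sed -i` against /etc/crio/crio.conf.d/02-crio.conf, but that drop-in file is not present in the v1.17.0-era kicbase image (gcr.io/k8s-minikube/kicbase:v0.0.17), so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal sketch of a guarded variant of the same edit, run inside the node, that only rewrites whichever CRI-O config file actually exists (a hypothetical workaround for illustration, not the project's fix):

	# Hypothetical guard: pick a CRI-O config file that exists before editing it.
	for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
	  if [ -f "$f" ]; then
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$f"
	    sudo systemctl restart crio
	    break
	  fi
	done

The sed expression is the one shown in the failing command; only the existence check around it is new.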
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-08 00:12:53.547563684 +0000 UTC m=+2611.304888499
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-519102
helpers_test.go:235: (dbg) docker inspect running-upgrade-519102:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "36f78f6f8c532004bde66e53896321c2d805823b35618236c6f02b2f47229669",
	        "Created": "2023-11-08T00:12:04.142690863Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1577197,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-08T00:12:04.567931929Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/36f78f6f8c532004bde66e53896321c2d805823b35618236c6f02b2f47229669/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/36f78f6f8c532004bde66e53896321c2d805823b35618236c6f02b2f47229669/hostname",
	        "HostsPath": "/var/lib/docker/containers/36f78f6f8c532004bde66e53896321c2d805823b35618236c6f02b2f47229669/hosts",
	        "LogPath": "/var/lib/docker/containers/36f78f6f8c532004bde66e53896321c2d805823b35618236c6f02b2f47229669/36f78f6f8c532004bde66e53896321c2d805823b35618236c6f02b2f47229669-json.log",
	        "Name": "/running-upgrade-519102",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-519102:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-519102",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/04ed3ad045c3dc0b558c0173199c037c7045a5ea50fcbf5a8ed4f368e944cdb6-init/diff:/var/lib/docker/overlay2/fb46fc08507ef88cb98a75cbb8e51e2edbfbacd1ad3ee48437d8d02e51bbd584/diff:/var/lib/docker/overlay2/732bb3e2acd65f8c20a3fc625dc26b482ad7ff8de524d68c9f102cf81e373098/diff:/var/lib/docker/overlay2/c9331376376ceee7038a7dac57a989d5142cfd92db58b566dfb63a6a175637c1/diff:/var/lib/docker/overlay2/c92f97507d8ee1b06975047d65786f2a362dba27ce8859bd73fa660afda1db10/diff:/var/lib/docker/overlay2/e40d005b9d3279cbb3f1257982040a0583f8d9cffe7f2f112cf47b9526ce0177/diff:/var/lib/docker/overlay2/d073018fcd42cc537d6a4a2bdc69c606da8109180bf152c90336a61feef539bf/diff:/var/lib/docker/overlay2/d5de5d72558fc15914a457e10320ebdd06a062ccfac13a0f42125fc3c80e67da/diff:/var/lib/docker/overlay2/cc265c2fd0a1495c806e9eb888b53eb3b03206bdca7585d297673c480049af4c/diff:/var/lib/docker/overlay2/2e015019bfb131610e9fbdcc42c08ae3a4ddd5802d4a91686348f4763e4c5dab/diff:/var/lib/docker/overlay2/9450ca
cc74479ce78f9c195f64c0c3c91a7939a94f06aa1ffc11d3c1774f66d1/diff:/var/lib/docker/overlay2/7a46864b8a99cacbb42111850b3371b286d9ef251a5cb8f2770aabbdb0998ed6/diff:/var/lib/docker/overlay2/39f3aaf46b857714e0dd9f53615bdaf248fb0e4aa9aa3d08035333585e37bf05/diff:/var/lib/docker/overlay2/6b56ed8e52561a1ad8184c33bcee5a4673550cbb466b7a9068834bd1e1b42999/diff:/var/lib/docker/overlay2/f5620b211d23706e17e01ba1715ffbb32c6c4efc2294b8769dce8f718e630358/diff:/var/lib/docker/overlay2/8bd3c14252b9f65efc209ca6541b5421f5a3a53346804398f1de1b409eb26d4e/diff:/var/lib/docker/overlay2/907d2055b803c60c4ef70a456c95335756a56f02ba28133444af6d64dae614fe/diff:/var/lib/docker/overlay2/f8b344676b970831db0364310723d1133ce5cec6f434482eeefabdf0b1cac332/diff:/var/lib/docker/overlay2/690547549d65cabe6b801c8d1d9905c3fc92df1fb9810e0946fffa95ab0e83df/diff:/var/lib/docker/overlay2/6c03e6065bfe9b0dc5b49e7d942fa88fc3ec3de7a3184acb53aae33eee565ee5/diff:/var/lib/docker/overlay2/22af27f7cd4d1c7ac0e973c022cda56c965140610f2e64703419a43266e90605/diff:/var/lib/d
ocker/overlay2/1485e7a812566af688faa1de2e325be44f0428cd3814ce19f278ea21c6e3257b/diff:/var/lib/docker/overlay2/9ac29a2acf64f30caa079a4349fd6bfae37690976d6dc0c4fb2fcb1e15c180a4/diff:/var/lib/docker/overlay2/d2d4ac9e4c6dd14fae4d76bfe80b0fc00d599b1b8e983b4308afe83bc4e3d1df/diff:/var/lib/docker/overlay2/6e413040a59efd06e5797db74e35c52940d8dfe6763e370c73dc0f050623cf79/diff:/var/lib/docker/overlay2/8a252d786a6f2579d6da848d62d475c3695c9615d3f28b7c756192af89f2f85c/diff:/var/lib/docker/overlay2/b07b23e243d5fe1bd56988a797f49aed6cbe7651c4cd1ecbfbba0da095a98cef/diff:/var/lib/docker/overlay2/5d04e50dc6d326c6e56a3400790a3ba9dcbb082862a717af7053223a7e88c9e4/diff:/var/lib/docker/overlay2/68e2ec30cda386415a5b51ade2ee90ff907ec111874e95d4c86833eeb7687274/diff:/var/lib/docker/overlay2/23096aa2a71307fd8e8ca4f1b3aee468280aff47760fb12570b334b3df294196/diff:/var/lib/docker/overlay2/7c3c69d4d939641bb2474dde477b63120b3563a306d922460f59b5aacc9969b1/diff:/var/lib/docker/overlay2/aba8e8acd5337fa253eaf133d050a45f54516dd8d91f0fe5770d06752ec
6c623/diff:/var/lib/docker/overlay2/b7a85f0b1352d246d5a491920c31b2fec8c9b38e4bb56a54b41a84ac87a442be/diff:/var/lib/docker/overlay2/dc2db48f40a65b19720f772b9c160b4ebfdfcf11979484f041cdeb577d288c4a/diff:/var/lib/docker/overlay2/a5052bd5093008b3d7294f647eab7aacb6bc8502151192472cd0117f71cebc07/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04ed3ad045c3dc0b558c0173199c037c7045a5ea50fcbf5a8ed4f368e944cdb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04ed3ad045c3dc0b558c0173199c037c7045a5ea50fcbf5a8ed4f368e944cdb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04ed3ad045c3dc0b558c0173199c037c7045a5ea50fcbf5a8ed4f368e944cdb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-519102",
	                "Source": "/var/lib/docker/volumes/running-upgrade-519102/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-519102",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-519102",
	                "name.minikube.sigs.k8s.io": "running-upgrade-519102",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fbf3ad72423dd6e27447f1fb648b2b411d3f39af65466929d1c1a6815bf546c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34253"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34251"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3fbf3ad72423",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-519102": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.191"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "36f78f6f8c53",
	                        "running-upgrade-519102"
	                    ],
	                    "NetworkID": "5258288d1906c2aa607e334a2efccd20034ebd8681adebcf41989b8f8b205f25",
	                    "EndpointID": "0775a11e7d2293b74f2855846dc84e5a84b6927d4159dac8c3ae8f03af22aec5",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.191",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:bf",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-519102 -n running-upgrade-519102
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-519102 -n running-upgrade-519102: exit status 4 (397.701113ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:12:53.895423 1581426 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-519102" does not appear in /home/jenkins/minikube-integration/17585-1449649/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-519102" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
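The exit status 4 here reflects only the stale-kubeconfig warning in the stdout above: because start aborted before the cluster was bootstrapped, no endpoint for "running-upgrade-519102" was written to /home/jenkins/minikube-integration/17585-1449649/kubeconfig. A short sketch of the manual check the warning itself suggests (assuming the profile still existed at that point):

	# List the contexts the kubeconfig actually holds ...
	kubectl config get-contexts
	# ... and let minikube rewrite the entry for this profile, as the warning advises.
	minikube -p running-upgrade-519102 update-context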
helpers_test.go:175: Cleaning up "running-upgrade-519102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-519102
E1108 00:12:56.101720 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-519102: (3.12665474s)
--- FAIL: TestRunningBinaryUpgrade (73.42s)

                                                
                                    
x
+
TestMissingContainerUpgrade (187.1s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.564313718.exe start -p missing-upgrade-424818 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.564313718.exe start -p missing-upgrade-424818 --memory=2200 --driver=docker  --container-runtime=crio: (2m22.516591333s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-424818
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-424818: (1.929481875s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-424818
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-424818 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-424818 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (38.963023818s)
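The "missing container" scenario is set up by the three commands above: the old v1.17.0 binary creates the cluster, the container is stopped and removed out from under minikube, and the new binary must recreate it; it is that final start which exits with status 90 here. A condensed sketch of the same sequence, with the profile name and binary paths copied from the log (the /tmp path is whatever the test downloaded):

	/tmp/minikube-v1.17.0.564313718.exe start -p missing-upgrade-424818 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-424818
	docker rm missing-upgrade-424818
	out/minikube-linux-arm64 start -p missing-upgrade-424818 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio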

                                                
                                                
-- stdout --
	* [missing-upgrade-424818] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-424818 in cluster missing-upgrade-424818
	* Pulling base image ...
	* docker "missing-upgrade-424818" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:09:34.446187 1568344 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:09:34.446455 1568344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:09:34.446467 1568344 out.go:309] Setting ErrFile to fd 2...
	I1108 00:09:34.446473 1568344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:09:34.446802 1568344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1108 00:09:34.447247 1568344 out.go:303] Setting JSON to false
	I1108 00:09:34.448518 1568344 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24724,"bootTime":1699377451,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1108 00:09:34.448604 1568344 start.go:138] virtualization:  
	I1108 00:09:34.453544 1568344 out.go:177] * [missing-upgrade-424818] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 00:09:34.455755 1568344 notify.go:220] Checking for updates...
	I1108 00:09:34.456499 1568344 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:09:34.458592 1568344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:09:34.460392 1568344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1108 00:09:34.462274 1568344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1108 00:09:34.464300 1568344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 00:09:34.466052 1568344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:09:34.468542 1568344 config.go:182] Loaded profile config "missing-upgrade-424818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1108 00:09:34.470880 1568344 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1108 00:09:34.472649 1568344 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:09:34.496484 1568344 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 00:09:34.496586 1568344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 00:09:34.584571 1568344 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-08 00:09:34.574402523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 00:09:34.584679 1568344 docker.go:295] overlay module found
	I1108 00:09:34.588060 1568344 out.go:177] * Using the docker driver based on existing profile
	I1108 00:09:34.590239 1568344 start.go:298] selected driver: docker
	I1108 00:09:34.590260 1568344 start.go:902] validating driver "docker" against &{Name:missing-upgrade-424818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-424818 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.115 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1108 00:09:34.590367 1568344 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:09:34.590988 1568344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 00:09:34.667623 1568344 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-08 00:09:34.653247789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 00:09:34.667951 1568344 cni.go:84] Creating CNI manager for ""
	I1108 00:09:34.667965 1568344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 00:09:34.667976 1568344 start_flags.go:323] config:
	{Name:missing-upgrade-424818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-424818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.115 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1108 00:09:34.670699 1568344 out.go:177] * Starting control plane node missing-upgrade-424818 in cluster missing-upgrade-424818
	I1108 00:09:34.673002 1568344 cache.go:121] Beginning downloading kic base image for docker with crio
	I1108 00:09:34.675288 1568344 out.go:177] * Pulling base image ...
	I1108 00:09:34.677007 1568344 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1108 00:09:34.677087 1568344 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1108 00:09:34.699232 1568344 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1108 00:09:34.699910 1568344 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1108 00:09:34.700358 1568344 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1108 00:09:34.775317 1568344 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1108 00:09:34.775477 1568344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/missing-upgrade-424818/config.json ...
	I1108 00:09:34.776726 1568344 cache.go:107] acquiring lock: {Name:mk9e35b0d2aaeaeda779068ba9df17902904c0e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:34.776741 1568344 cache.go:107] acquiring lock: {Name:mkcd7bd8164a58c24f54ac63c7eba76b9f8dc423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:34.776874 1568344 cache.go:107] acquiring lock: {Name:mk5f4678cde071ac291daca98ee4b8d032ff92bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:34.776903 1568344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1108 00:09:34.777040 1568344 cache.go:107] acquiring lock: {Name:mk056c2b8d4fa01baa4d27688b624c39a4b0d073 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:34.777063 1568344 cache.go:107] acquiring lock: {Name:mkfd477add759fc1236d874a521416746bbb0d5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:34.777319 1568344 cache.go:107] acquiring lock: {Name:mk642d2c2ec64bded73e646a7e731120f64551b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:34.777351 1568344 cache.go:107] acquiring lock: {Name:mk42aa21c87376b51008fca60e54933db2e47906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:34.777497 1568344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1108 00:09:34.777689 1568344 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 00:09:34.777704 1568344 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 969.961µs
	I1108 00:09:34.777732 1568344 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 00:09:34.778048 1568344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1108 00:09:34.778071 1568344 cache.go:107] acquiring lock: {Name:mka6ad01a28295ccdde91d33e10b2426036c340e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:34.778560 1568344 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1108 00:09:34.778748 1568344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1108 00:09:34.779119 1568344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1108 00:09:34.779473 1568344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1108 00:09:34.779720 1568344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1108 00:09:34.780051 1568344 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1108 00:09:34.782142 1568344 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1108 00:09:34.782292 1568344 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1108 00:09:34.782719 1568344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1108 00:09:34.783180 1568344 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1108 00:09:34.783359 1568344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1108 00:09:35.360367 1568344 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I1108 00:09:35.372990 1568344 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1108 00:09:35.374322 1568344 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1108 00:09:35.374387 1568344 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	W1108 00:09:35.391555 1568344 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1108 00:09:35.391660 1568344 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1108 00:09:35.392059 1568344 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W1108 00:09:35.394660 1568344 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1108 00:09:35.394730 1568344 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I1108 00:09:35.432910 1568344 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1108 00:09:35.502081 1568344 cache.go:157] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1108 00:09:35.502109 1568344 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 725.04765ms
	I1108 00:09:35.502123 1568344 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I1108 00:09:35.894483 1568344 cache.go:157] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1108 00:09:35.894553 1568344 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.116956581s
	I1108 00:09:35.894648 1568344 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  513.36 KiB / 287.99 MiB [] 0.17% ? p/s ?
	I1108 00:09:36.094041 1568344 cache.go:157] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1108 00:09:36.094095 1568344 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.316778993s
	I1108 00:09:36.094110 1568344 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.50 MiB / 287.99 MiB  6.08% 13.37 MiB
	I1108 00:09:36.428111 1568344 cache.go:157] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1108 00:09:36.428142 1568344 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.651269452s
	I1108 00:09:36.428172 1568344 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 14.43 MiB
	I1108 00:09:37.255603 1568344 cache.go:157] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1108 00:09:37.255640 1568344 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 2.478595866s
	I1108 00:09:37.255693 1568344 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  32.75 MiB / 287.99 MiB  11.37% 13.37 MiB
	I1108 00:09:37.953949 1568344 cache.go:157] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1108 00:09:37.954000 1568344 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 3.1772946s
	I1108 00:09:37.954014 1568344 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  166.12 MiB / 287.99 MiB  57.68% 22.48 MiB
	I1108 00:09:41.051777 1568344 cache.go:157] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1108 00:09:41.051803 1568344 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 6.274455288s
	I1108 00:09:41.051816 1568344 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1108 00:09:41.051830 1568344 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 23.71 MiB
	I1108 00:09:47.666845 1568344 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1108 00:09:47.666856 1568344 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1108 00:09:48.024584 1568344 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1108 00:09:48.024621 1568344 cache.go:194] Successfully downloaded all kic artifacts
	I1108 00:09:48.024672 1568344 start.go:365] acquiring machines lock for missing-upgrade-424818: {Name:mkbbaec6e355a62f9337dd0f7a6e79b35245d623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:09:48.024766 1568344 start.go:369] acquired machines lock for "missing-upgrade-424818" in 73.05µs
	I1108 00:09:48.024789 1568344 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:09:48.024797 1568344 fix.go:54] fixHost starting: 
	I1108 00:09:48.025086 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:48.056444 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:09:48.056515 1568344 fix.go:102] recreateIfNeeded on missing-upgrade-424818: state= err=unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:48.056532 1568344 fix.go:107] machineExists: false. err=machine does not exist
	I1108 00:09:48.059172 1568344 out.go:177] * docker "missing-upgrade-424818" container is missing, will recreate.
	I1108 00:09:48.061091 1568344 delete.go:124] DEMOLISHING missing-upgrade-424818 ...
	I1108 00:09:48.061213 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:48.091462 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	W1108 00:09:48.091531 1568344 stop.go:75] unable to get state: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:48.091548 1568344 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:48.091994 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:48.113661 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:09:48.113723 1568344 delete.go:82] Unable to get host status for missing-upgrade-424818, assuming it has already been deleted: state: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:48.113793 1568344 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-424818
	W1108 00:09:48.152714 1568344 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-424818 returned with exit code 1
	I1108 00:09:48.152753 1568344 kic.go:371] could not find the container missing-upgrade-424818 to remove it. will try anyways
	I1108 00:09:48.152832 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:48.181132 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	W1108 00:09:48.181189 1568344 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:48.181255 1568344 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-424818 /bin/bash -c "sudo init 0"
	W1108 00:09:48.205730 1568344 cli_runner.go:211] docker exec --privileged -t missing-upgrade-424818 /bin/bash -c "sudo init 0" returned with exit code 1
	I1108 00:09:48.205759 1568344 oci.go:650] error shutdown missing-upgrade-424818: docker exec --privileged -t missing-upgrade-424818 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:49.206000 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:49.224112 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:09:49.224189 1568344 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:49.224198 1568344 oci.go:664] temporary error: container missing-upgrade-424818 status is  but expect it to be exited
	I1108 00:09:49.224226 1568344 retry.go:31] will retry after 293.781293ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:49.518772 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:49.540979 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:09:49.541039 1568344 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:49.541059 1568344 oci.go:664] temporary error: container missing-upgrade-424818 status is  but expect it to be exited
	I1108 00:09:49.541087 1568344 retry.go:31] will retry after 961.214551ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:50.502537 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:50.526706 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:09:50.526761 1568344 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:50.526770 1568344 oci.go:664] temporary error: container missing-upgrade-424818 status is  but expect it to be exited
	I1108 00:09:50.526794 1568344 retry.go:31] will retry after 1.293319157s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:51.820301 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:51.851307 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:09:51.851364 1568344 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:51.851373 1568344 oci.go:664] temporary error: container missing-upgrade-424818 status is  but expect it to be exited
	I1108 00:09:51.851398 1568344 retry.go:31] will retry after 1.492460921s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:53.344113 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:53.360757 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:09:53.360818 1568344 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:53.360831 1568344 oci.go:664] temporary error: container missing-upgrade-424818 status is  but expect it to be exited
	I1108 00:09:53.360868 1568344 retry.go:31] will retry after 2.708843193s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:56.070225 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:09:56.087823 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:09:56.087891 1568344 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:09:56.087904 1568344 oci.go:664] temporary error: container missing-upgrade-424818 status is  but expect it to be exited
	I1108 00:09:56.087930 1568344 retry.go:31] will retry after 5.539458016s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:10:01.627656 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:10:01.645285 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:10:01.645377 1568344 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:10:01.645405 1568344 oci.go:664] temporary error: container missing-upgrade-424818 status is  but expect it to be exited
	I1108 00:10:01.645433 1568344 retry.go:31] will retry after 4.307684169s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:10:05.953762 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:10:05.971567 1568344 cli_runner.go:211] docker container inspect missing-upgrade-424818 --format={{.State.Status}} returned with exit code 1
	I1108 00:10:05.971633 1568344 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	I1108 00:10:05.971646 1568344 oci.go:664] temporary error: container missing-upgrade-424818 status is  but expect it to be exited
	I1108 00:10:05.971679 1568344 oci.go:88] couldn't shut down missing-upgrade-424818 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-424818": docker container inspect missing-upgrade-424818 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424818
	 
	I1108 00:10:05.971741 1568344 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-424818
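	The give-up path above reduces to the same two docker CLI calls the driver logs: probe the container state, then force-remove whatever is left. A minimal manual sketch, reusing the container name from this log (not a supported minikube interface):
	# Sketch only: mirror the probe/force-remove sequence logged above.
	# A missing container makes `inspect` exit non-zero with "No such container",
	# after which the forced removal is effectively a no-op.
	docker container inspect missing-upgrade-424818 --format '{{.State.Status}}'
	docker rm -f -v missing-upgrade-424818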
	I1108 00:10:05.988214 1568344 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-424818
	W1108 00:10:06.009380 1568344 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-424818 returned with exit code 1
	I1108 00:10:06.009483 1568344 cli_runner.go:164] Run: docker network inspect missing-upgrade-424818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 00:10:06.029228 1568344 cli_runner.go:164] Run: docker network rm missing-upgrade-424818
	I1108 00:10:06.152258 1568344 fix.go:114] Sleeping 1 second for extra luck!
	I1108 00:10:07.153072 1568344 start.go:125] createHost starting for "" (driver="docker")
	I1108 00:10:07.156504 1568344 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1108 00:10:07.156690 1568344 start.go:159] libmachine.API.Create for "missing-upgrade-424818" (driver="docker")
	I1108 00:10:07.156719 1568344 client.go:168] LocalClient.Create starting
	I1108 00:10:07.156799 1568344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem
	I1108 00:10:07.156845 1568344 main.go:141] libmachine: Decoding PEM data...
	I1108 00:10:07.156867 1568344 main.go:141] libmachine: Parsing certificate...
	I1108 00:10:07.156926 1568344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem
	I1108 00:10:07.156948 1568344 main.go:141] libmachine: Decoding PEM data...
	I1108 00:10:07.156959 1568344 main.go:141] libmachine: Parsing certificate...
	I1108 00:10:07.157229 1568344 cli_runner.go:164] Run: docker network inspect missing-upgrade-424818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 00:10:07.174430 1568344 cli_runner.go:211] docker network inspect missing-upgrade-424818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 00:10:07.174511 1568344 network_create.go:281] running [docker network inspect missing-upgrade-424818] to gather additional debugging logs...
	I1108 00:10:07.174532 1568344 cli_runner.go:164] Run: docker network inspect missing-upgrade-424818
	W1108 00:10:07.190948 1568344 cli_runner.go:211] docker network inspect missing-upgrade-424818 returned with exit code 1
	I1108 00:10:07.190985 1568344 network_create.go:284] error running [docker network inspect missing-upgrade-424818]: docker network inspect missing-upgrade-424818: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-424818 not found
	I1108 00:10:07.191009 1568344 network_create.go:286] output of [docker network inspect missing-upgrade-424818]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-424818 not found
	
	** /stderr **
	I1108 00:10:07.191109 1568344 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 00:10:07.208042 1568344 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-45e1a0d37e35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:27:1e:3f:e9} reservation:<nil>}
	I1108 00:10:07.208358 1568344 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca275e1d612b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ed:f1:c6:f6} reservation:<nil>}
	I1108 00:10:07.208704 1568344 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3544da786f34 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:8e:ca:45:6e} reservation:<nil>}
	I1108 00:10:07.209192 1568344 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40036fc960}
	I1108 00:10:07.209212 1568344 network_create.go:124] attempt to create docker network missing-upgrade-424818 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 00:10:07.209267 1568344 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-424818 missing-upgrade-424818
	I1108 00:10:07.286184 1568344 network_create.go:108] docker network missing-upgrade-424818 192.168.76.0/24 created
	I1108 00:10:07.286229 1568344 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-424818" container
	I1108 00:10:07.286304 1568344 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 00:10:07.303041 1568344 cli_runner.go:164] Run: docker volume create missing-upgrade-424818 --label name.minikube.sigs.k8s.io=missing-upgrade-424818 --label created_by.minikube.sigs.k8s.io=true
	I1108 00:10:07.325672 1568344 oci.go:103] Successfully created a docker volume missing-upgrade-424818
	I1108 00:10:07.325758 1568344 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-424818-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-424818 --entrypoint /usr/bin/test -v missing-upgrade-424818:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1108 00:10:07.865818 1568344 oci.go:107] Successfully prepared a docker volume missing-upgrade-424818
	I1108 00:10:07.865857 1568344 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1108 00:10:07.866070 1568344 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 00:10:07.866287 1568344 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 00:10:07.946626 1568344 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-424818 --name missing-upgrade-424818 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-424818 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-424818 --network missing-upgrade-424818 --ip 192.168.76.2 --volume missing-upgrade-424818:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
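	The recreated container can be verified by hand with the same inspect queries the driver issues next; a short sketch, again assuming the container name from this log:
	# Sketch only: confirm the recreated container is running and find the host port
	# mapped to its SSH port (22/tcp), as the log does further down.
	docker container inspect missing-upgrade-424818 --format '{{.State.Running}}'
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-424818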
	I1108 00:10:08.369653 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Running}}
	I1108 00:10:08.389955 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	I1108 00:10:08.412249 1568344 cli_runner.go:164] Run: docker exec missing-upgrade-424818 stat /var/lib/dpkg/alternatives/iptables
	I1108 00:10:08.476279 1568344 oci.go:144] the created container "missing-upgrade-424818" has a running status.
	I1108 00:10:08.476311 1568344 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa...
	I1108 00:10:09.066039 1568344 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 00:10:09.090271 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	I1108 00:10:09.113549 1568344 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 00:10:09.113575 1568344 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-424818 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 00:10:09.194794 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	I1108 00:10:09.229214 1568344 machine.go:88] provisioning docker machine ...
	I1108 00:10:09.229245 1568344 ubuntu.go:169] provisioning hostname "missing-upgrade-424818"
	I1108 00:10:09.229321 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:09.249866 1568344 main.go:141] libmachine: Using SSH client type: native
	I1108 00:10:09.250327 1568344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1108 00:10:09.250347 1568344 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-424818 && echo "missing-upgrade-424818" | sudo tee /etc/hostname
	I1108 00:10:09.406740 1568344 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-424818
	
	I1108 00:10:09.406881 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:09.429858 1568344 main.go:141] libmachine: Using SSH client type: native
	I1108 00:10:09.430465 1568344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1108 00:10:09.430489 1568344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-424818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-424818/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-424818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:10:09.571171 1568344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:10:09.571199 1568344 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-1449649/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-1449649/.minikube}
	I1108 00:10:09.571219 1568344 ubuntu.go:177] setting up certificates
	I1108 00:10:09.571229 1568344 provision.go:83] configureAuth start
	I1108 00:10:09.571300 1568344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-424818
	I1108 00:10:09.597387 1568344 provision.go:138] copyHostCerts
	I1108 00:10:09.597456 1568344 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem, removing ...
	I1108 00:10:09.597468 1568344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1108 00:10:09.597544 1568344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem (1082 bytes)
	I1108 00:10:09.597642 1568344 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem, removing ...
	I1108 00:10:09.597653 1568344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1108 00:10:09.597683 1568344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem (1123 bytes)
	I1108 00:10:09.597746 1568344 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem, removing ...
	I1108 00:10:09.597756 1568344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1108 00:10:09.597782 1568344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem (1675 bytes)
	I1108 00:10:09.597833 1568344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-424818 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-424818]
	I1108 00:10:09.898115 1568344 provision.go:172] copyRemoteCerts
	I1108 00:10:09.898231 1568344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:10:09.898318 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:09.917461 1568344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa Username:docker}
	I1108 00:10:10.019936 1568344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 00:10:10.046284 1568344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:10:10.071010 1568344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:10:10.096688 1568344 provision.go:86] duration metric: configureAuth took 525.439254ms
	I1108 00:10:10.096720 1568344 ubuntu.go:193] setting minikube options for container-runtime
	I1108 00:10:10.096927 1568344 config.go:182] Loaded profile config "missing-upgrade-424818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1108 00:10:10.097040 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:10.116806 1568344 main.go:141] libmachine: Using SSH client type: native
	I1108 00:10:10.117231 1568344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1108 00:10:10.117255 1568344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:10:10.558544 1568344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:10:10.558570 1568344 machine.go:91] provisioned docker machine in 1.32933419s
	I1108 00:10:10.558582 1568344 client.go:171] LocalClient.Create took 3.401850721s
	I1108 00:10:10.558598 1568344 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-424818" took 3.401906154s
	I1108 00:10:10.558607 1568344 start.go:300] post-start starting for "missing-upgrade-424818" (driver="docker")
	I1108 00:10:10.558620 1568344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:10:10.558693 1568344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:10:10.558739 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:10.577281 1568344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa Username:docker}
	I1108 00:10:10.679338 1568344 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:10:10.683256 1568344 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1108 00:10:10.683295 1568344 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 00:10:10.683312 1568344 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1108 00:10:10.683320 1568344 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1108 00:10:10.683333 1568344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/addons for local assets ...
	I1108 00:10:10.683397 1568344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/files for local assets ...
	I1108 00:10:10.683491 1568344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> 14550192.pem in /etc/ssl/certs
	I1108 00:10:10.683602 1568344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:10:10.692347 1568344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /etc/ssl/certs/14550192.pem (1708 bytes)
	I1108 00:10:10.715640 1568344 start.go:303] post-start completed in 157.013036ms
	I1108 00:10:10.716044 1568344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-424818
	I1108 00:10:10.736267 1568344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/missing-upgrade-424818/config.json ...
	I1108 00:10:10.736554 1568344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 00:10:10.736609 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:10.754082 1568344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa Username:docker}
	I1108 00:10:10.849108 1568344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 00:10:10.854773 1568344 start.go:128] duration metric: createHost completed in 3.701661306s
	I1108 00:10:10.854891 1568344 cli_runner.go:164] Run: docker container inspect missing-upgrade-424818 --format={{.State.Status}}
	W1108 00:10:10.872672 1568344 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:10:10.872702 1568344 machine.go:88] provisioning docker machine ...
	I1108 00:10:10.872721 1568344 ubuntu.go:169] provisioning hostname "missing-upgrade-424818"
	I1108 00:10:10.872809 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:10.890970 1568344 main.go:141] libmachine: Using SSH client type: native
	I1108 00:10:10.891384 1568344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1108 00:10:10.891404 1568344 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-424818 && echo "missing-upgrade-424818" | sudo tee /etc/hostname
	I1108 00:10:11.042990 1568344 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-424818
	
	I1108 00:10:11.043071 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:11.063533 1568344 main.go:141] libmachine: Using SSH client type: native
	I1108 00:10:11.063966 1568344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1108 00:10:11.063989 1568344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-424818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-424818/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-424818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:10:11.207340 1568344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:10:11.207369 1568344 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-1449649/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-1449649/.minikube}
	I1108 00:10:11.207386 1568344 ubuntu.go:177] setting up certificates
	I1108 00:10:11.207396 1568344 provision.go:83] configureAuth start
	I1108 00:10:11.207458 1568344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-424818
	I1108 00:10:11.225487 1568344 provision.go:138] copyHostCerts
	I1108 00:10:11.225554 1568344 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem, removing ...
	I1108 00:10:11.225566 1568344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1108 00:10:11.225641 1568344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem (1082 bytes)
	I1108 00:10:11.225744 1568344 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem, removing ...
	I1108 00:10:11.225752 1568344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1108 00:10:11.225780 1568344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem (1123 bytes)
	I1108 00:10:11.225839 1568344 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem, removing ...
	I1108 00:10:11.225848 1568344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1108 00:10:11.225873 1568344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem (1675 bytes)
	I1108 00:10:11.225920 1568344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-424818 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-424818]
	I1108 00:10:11.421640 1568344 provision.go:172] copyRemoteCerts
	I1108 00:10:11.421713 1568344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:10:11.421753 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:11.440398 1568344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa Username:docker}
	I1108 00:10:11.543414 1568344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 00:10:11.566132 1568344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:10:11.587959 1568344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:10:11.610691 1568344 provision.go:86] duration metric: configureAuth took 403.28059ms
	I1108 00:10:11.610718 1568344 ubuntu.go:193] setting minikube options for container-runtime
	I1108 00:10:11.610908 1568344 config.go:182] Loaded profile config "missing-upgrade-424818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1108 00:10:11.611014 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:11.629239 1568344 main.go:141] libmachine: Using SSH client type: native
	I1108 00:10:11.629648 1568344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1108 00:10:11.629669 1568344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:10:11.921863 1568344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:10:11.921892 1568344 machine.go:91] provisioned docker machine in 1.049180894s
	I1108 00:10:11.921902 1568344 start.go:300] post-start starting for "missing-upgrade-424818" (driver="docker")
	I1108 00:10:11.921914 1568344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:10:11.921991 1568344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:10:11.922038 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:11.941612 1568344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa Username:docker}
	I1108 00:10:12.044328 1568344 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:10:12.048407 1568344 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1108 00:10:12.048436 1568344 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 00:10:12.048451 1568344 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1108 00:10:12.048459 1568344 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1108 00:10:12.048469 1568344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/addons for local assets ...
	I1108 00:10:12.048536 1568344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/files for local assets ...
	I1108 00:10:12.048619 1568344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> 14550192.pem in /etc/ssl/certs
	I1108 00:10:12.048729 1568344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:10:12.061449 1568344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /etc/ssl/certs/14550192.pem (1708 bytes)
	I1108 00:10:12.103141 1568344 start.go:303] post-start completed in 181.222265ms
	I1108 00:10:12.103241 1568344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 00:10:12.103291 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:12.128198 1568344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa Username:docker}
	I1108 00:10:12.232671 1568344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 00:10:12.241771 1568344 fix.go:56] fixHost completed within 24.216968265s
	I1108 00:10:12.241797 1568344 start.go:83] releasing machines lock for "missing-upgrade-424818", held for 24.217022451s
	I1108 00:10:12.241872 1568344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-424818
	I1108 00:10:12.261283 1568344 ssh_runner.go:195] Run: cat /version.json
	I1108 00:10:12.261342 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:12.261562 1568344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:10:12.261620 1568344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424818
	I1108 00:10:12.298089 1568344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa Username:docker}
	I1108 00:10:12.298873 1568344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/missing-upgrade-424818/id_rsa Username:docker}
	W1108 00:10:12.592934 1568344 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1108 00:10:12.593020 1568344 ssh_runner.go:195] Run: systemctl --version
	I1108 00:10:12.599005 1568344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:10:12.782316 1568344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1108 00:10:12.789292 1568344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:10:12.824550 1568344 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1108 00:10:12.824720 1568344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:10:12.858420 1568344 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:10:12.858481 1568344 start.go:472] detecting cgroup driver to use...
	I1108 00:10:12.858527 1568344 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1108 00:10:12.858615 1568344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:10:12.885195 1568344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:10:12.896273 1568344 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:10:12.896381 1568344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:10:12.908828 1568344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:10:12.920127 1568344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1108 00:10:12.934037 1568344 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1108 00:10:12.934110 1568344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:10:13.038894 1568344 docker.go:219] disabling docker service ...
	I1108 00:10:13.038965 1568344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:10:13.052132 1568344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:10:13.064235 1568344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:10:13.168425 1568344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:10:13.282616 1568344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:10:13.296090 1568344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:10:13.313713 1568344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1108 00:10:13.313836 1568344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:10:13.327281 1568344 out.go:177] 
	W1108 00:10:13.329293 1568344 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1108 00:10:13.329318 1568344 out.go:239] * 
	* 
	W1108 00:10:13.330328 1568344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:10:13.333230 1568344 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-424818 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-11-08 00:10:13.375181399 +0000 UTC m=+2451.132506223
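The root cause is visible in the stderr above: the sed that rewrites pause_image targets /etc/crio/crio.conf.d/02-crio.conf, but the v1.17.0-era kicbase image this profile runs does not ship that drop-in, so the command exits 2 and start aborts with RUNTIME_ENABLE. A quick manual check against the leftover container (hypothetical one-off triage, not part of the test; the container is deleted at the end of the post-mortem below) would confirm which crio config files the old image actually carries:

	# list the crio config locations inside the kicbase container from this run
	docker exec missing-upgrade-424818 ls -la /etc/crio /etc/crio/crio.conf.d
	# the pause_image rewrite only succeeds when the drop-in exists
	docker exec missing-upgrade-424818 sh -c 'test -f /etc/crio/crio.conf.d/02-crio.conf && echo present || echo missing'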
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-424818
helpers_test.go:235: (dbg) docker inspect missing-upgrade-424818:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f8ada78c701eee6fff69c482a8cb5cfce23de72600f9b4d463cfb473dcb6b011",
	        "Created": "2023-11-08T00:10:07.963190732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1569585,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-08T00:10:08.3607333Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/f8ada78c701eee6fff69c482a8cb5cfce23de72600f9b4d463cfb473dcb6b011/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8ada78c701eee6fff69c482a8cb5cfce23de72600f9b4d463cfb473dcb6b011/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8ada78c701eee6fff69c482a8cb5cfce23de72600f9b4d463cfb473dcb6b011/hosts",
	        "LogPath": "/var/lib/docker/containers/f8ada78c701eee6fff69c482a8cb5cfce23de72600f9b4d463cfb473dcb6b011/f8ada78c701eee6fff69c482a8cb5cfce23de72600f9b4d463cfb473dcb6b011-json.log",
	        "Name": "/missing-upgrade-424818",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "missing-upgrade-424818:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-424818",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a923ac0208fd00de83bdb8e95fe1ec5679afbf69d7eaf6a560bbe67fa8c11d1-init/diff:/var/lib/docker/overlay2/fb46fc08507ef88cb98a75cbb8e51e2edbfbacd1ad3ee48437d8d02e51bbd584/diff:/var/lib/docker/overlay2/732bb3e2acd65f8c20a3fc625dc26b482ad7ff8de524d68c9f102cf81e373098/diff:/var/lib/docker/overlay2/c9331376376ceee7038a7dac57a989d5142cfd92db58b566dfb63a6a175637c1/diff:/var/lib/docker/overlay2/c92f97507d8ee1b06975047d65786f2a362dba27ce8859bd73fa660afda1db10/diff:/var/lib/docker/overlay2/e40d005b9d3279cbb3f1257982040a0583f8d9cffe7f2f112cf47b9526ce0177/diff:/var/lib/docker/overlay2/d073018fcd42cc537d6a4a2bdc69c606da8109180bf152c90336a61feef539bf/diff:/var/lib/docker/overlay2/d5de5d72558fc15914a457e10320ebdd06a062ccfac13a0f42125fc3c80e67da/diff:/var/lib/docker/overlay2/cc265c2fd0a1495c806e9eb888b53eb3b03206bdca7585d297673c480049af4c/diff:/var/lib/docker/overlay2/2e015019bfb131610e9fbdcc42c08ae3a4ddd5802d4a91686348f4763e4c5dab/diff:/var/lib/docker/overlay2/9450ca
cc74479ce78f9c195f64c0c3c91a7939a94f06aa1ffc11d3c1774f66d1/diff:/var/lib/docker/overlay2/7a46864b8a99cacbb42111850b3371b286d9ef251a5cb8f2770aabbdb0998ed6/diff:/var/lib/docker/overlay2/39f3aaf46b857714e0dd9f53615bdaf248fb0e4aa9aa3d08035333585e37bf05/diff:/var/lib/docker/overlay2/6b56ed8e52561a1ad8184c33bcee5a4673550cbb466b7a9068834bd1e1b42999/diff:/var/lib/docker/overlay2/f5620b211d23706e17e01ba1715ffbb32c6c4efc2294b8769dce8f718e630358/diff:/var/lib/docker/overlay2/8bd3c14252b9f65efc209ca6541b5421f5a3a53346804398f1de1b409eb26d4e/diff:/var/lib/docker/overlay2/907d2055b803c60c4ef70a456c95335756a56f02ba28133444af6d64dae614fe/diff:/var/lib/docker/overlay2/f8b344676b970831db0364310723d1133ce5cec6f434482eeefabdf0b1cac332/diff:/var/lib/docker/overlay2/690547549d65cabe6b801c8d1d9905c3fc92df1fb9810e0946fffa95ab0e83df/diff:/var/lib/docker/overlay2/6c03e6065bfe9b0dc5b49e7d942fa88fc3ec3de7a3184acb53aae33eee565ee5/diff:/var/lib/docker/overlay2/22af27f7cd4d1c7ac0e973c022cda56c965140610f2e64703419a43266e90605/diff:/var/lib/d
ocker/overlay2/1485e7a812566af688faa1de2e325be44f0428cd3814ce19f278ea21c6e3257b/diff:/var/lib/docker/overlay2/9ac29a2acf64f30caa079a4349fd6bfae37690976d6dc0c4fb2fcb1e15c180a4/diff:/var/lib/docker/overlay2/d2d4ac9e4c6dd14fae4d76bfe80b0fc00d599b1b8e983b4308afe83bc4e3d1df/diff:/var/lib/docker/overlay2/6e413040a59efd06e5797db74e35c52940d8dfe6763e370c73dc0f050623cf79/diff:/var/lib/docker/overlay2/8a252d786a6f2579d6da848d62d475c3695c9615d3f28b7c756192af89f2f85c/diff:/var/lib/docker/overlay2/b07b23e243d5fe1bd56988a797f49aed6cbe7651c4cd1ecbfbba0da095a98cef/diff:/var/lib/docker/overlay2/5d04e50dc6d326c6e56a3400790a3ba9dcbb082862a717af7053223a7e88c9e4/diff:/var/lib/docker/overlay2/68e2ec30cda386415a5b51ade2ee90ff907ec111874e95d4c86833eeb7687274/diff:/var/lib/docker/overlay2/23096aa2a71307fd8e8ca4f1b3aee468280aff47760fb12570b334b3df294196/diff:/var/lib/docker/overlay2/7c3c69d4d939641bb2474dde477b63120b3563a306d922460f59b5aacc9969b1/diff:/var/lib/docker/overlay2/aba8e8acd5337fa253eaf133d050a45f54516dd8d91f0fe5770d06752ec
6c623/diff:/var/lib/docker/overlay2/b7a85f0b1352d246d5a491920c31b2fec8c9b38e4bb56a54b41a84ac87a442be/diff:/var/lib/docker/overlay2/dc2db48f40a65b19720f772b9c160b4ebfdfcf11979484f041cdeb577d288c4a/diff:/var/lib/docker/overlay2/a5052bd5093008b3d7294f647eab7aacb6bc8502151192472cd0117f71cebc07/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a923ac0208fd00de83bdb8e95fe1ec5679afbf69d7eaf6a560bbe67fa8c11d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a923ac0208fd00de83bdb8e95fe1ec5679afbf69d7eaf6a560bbe67fa8c11d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a923ac0208fd00de83bdb8e95fe1ec5679afbf69d7eaf6a560bbe67fa8c11d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-424818",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-424818/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-424818",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-424818",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-424818",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6ba15e7d78e12f92bdd51eb6285adc6f4578c46b0718b26239792bbf908317c2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34238"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34240"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34239"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6ba15e7d78e1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-424818": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f8ada78c701e",
	                        "missing-upgrade-424818"
	                    ],
	                    "NetworkID": "d767d004a456101f5441c04fb5a84934a43ab6ebe69880d779e9318a1b37f7e1",
	                    "EndpointID": "efe1c501e86f60a258ca9fd9817ff0b9fb973d915b9dd2fe3cfe2b90744dbcce",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-424818 -n missing-upgrade-424818
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-424818 -n missing-upgrade-424818: exit status 6 (343.824556ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:10:13.723490 1570638 status.go:415] kubeconfig endpoint: got: 192.168.59.115:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-424818" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-424818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-424818
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-424818: (1.932487706s)
--- FAIL: TestMissingContainerUpgrade (187.10s)
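Separately, the status probe in the post-mortem reports a stale kubeconfig endpoint (got 192.168.59.115:8443, want 192.168.76.2:8443). The fix the warning itself suggests only matters if the profile is kept around for inspection rather than deleted as the cleanup step does; a hypothetical manual step would be:

	# repoint the kubeconfig entry at the container's current API server address
	out/minikube-linux-arm64 update-context -p missing-upgrade-424818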

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (82.59s)
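The failure below can be reproduced with the same three steps the test drives (binary paths are the ones from this run, copied from the Run lines in the log that follows): start the profile with the old v1.17.0 binary, stop it, then restart it with the binary under test.

	# upgrade path exercised by the log below
	/tmp/minikube-v1.17.0.3488402617.exe start -p stopped-upgrade-312173 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.17.0.3488402617.exe -p stopped-upgrade-312173 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-312173 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio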

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3488402617.exe start -p stopped-upgrade-312173 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1108 00:10:59.147130 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1108 00:11:19.456028 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.3488402617.exe start -p stopped-upgrade-312173 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m13.887955636s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.3488402617.exe -p stopped-upgrade-312173 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.3488402617.exe -p stopped-upgrade-312173 stop: (2.207768592s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-312173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-312173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.496659892s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-312173] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-312173 in cluster stopped-upgrade-312173
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-312173" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:11:33.504160 1574999 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:11:33.504424 1574999 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:11:33.504447 1574999 out.go:309] Setting ErrFile to fd 2...
	I1108 00:11:33.504467 1574999 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:11:33.504787 1574999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1108 00:11:33.505211 1574999 out.go:303] Setting JSON to false
	I1108 00:11:33.506440 1574999 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24843,"bootTime":1699377451,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1108 00:11:33.506545 1574999 start.go:138] virtualization:  
	I1108 00:11:33.510239 1574999 out.go:177] * [stopped-upgrade-312173] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 00:11:33.512356 1574999 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1108 00:11:33.520981 1574999 notify.go:220] Checking for updates...
	I1108 00:11:33.523549 1574999 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:11:33.525440 1574999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:11:33.527279 1574999 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1108 00:11:33.528898 1574999 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1108 00:11:33.530901 1574999 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 00:11:33.532443 1574999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:11:33.534775 1574999 config.go:182] Loaded profile config "stopped-upgrade-312173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1108 00:11:33.537175 1574999 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1108 00:11:33.538784 1574999 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:11:33.594522 1574999 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 00:11:33.594664 1574999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 00:11:33.762410 1574999 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-08 00:11:33.748446588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 00:11:33.762510 1574999 docker.go:295] overlay module found
	I1108 00:11:33.765538 1574999 out.go:177] * Using the docker driver based on existing profile
	I1108 00:11:33.767012 1574999 start.go:298] selected driver: docker
	I1108 00:11:33.767028 1574999 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-312173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-312173 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.110 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1108 00:11:33.767138 1574999 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:11:33.767758 1574999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 00:11:33.773349 1574999 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1108 00:11:33.891544 1574999 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-08 00:11:33.881906159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 00:11:33.891873 1574999 cni.go:84] Creating CNI manager for ""
	I1108 00:11:33.891892 1574999 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 00:11:33.891906 1574999 start_flags.go:323] config:
	{Name:stopped-upgrade-312173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-312173 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.110 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1108 00:11:33.894623 1574999 out.go:177] * Starting control plane node stopped-upgrade-312173 in cluster stopped-upgrade-312173
	I1108 00:11:33.896510 1574999 cache.go:121] Beginning downloading kic base image for docker with crio
	I1108 00:11:33.898106 1574999 out.go:177] * Pulling base image ...
	I1108 00:11:33.899523 1574999 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1108 00:11:33.899696 1574999 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1108 00:11:33.918985 1574999 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1108 00:11:33.919015 1574999 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1108 00:11:33.985349 1574999 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1108 00:11:33.985532 1574999 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/stopped-upgrade-312173/config.json ...
	I1108 00:11:33.985812 1574999 cache.go:194] Successfully downloaded all kic artifacts
	I1108 00:11:33.985882 1574999 start.go:365] acquiring machines lock for stopped-upgrade-312173: {Name:mkf2bd30c81db8ddf9abf4cfed5b7dfc135d6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.985959 1574999 start.go:369] acquired machines lock for "stopped-upgrade-312173" in 34.305µs
	I1108 00:11:33.986014 1574999 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:11:33.986030 1574999 fix.go:54] fixHost starting: 
	I1108 00:11:33.986254 1574999 cache.go:107] acquiring lock: {Name:mkcd7bd8164a58c24f54ac63c7eba76b9f8dc423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.986323 1574999 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 00:11:33.986335 1574999 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.151µs
	I1108 00:11:33.986358 1574999 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 00:11:33.986380 1574999 cache.go:107] acquiring lock: {Name:mk5f4678cde071ac291daca98ee4b8d032ff92bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.986428 1574999 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1108 00:11:33.986443 1574999 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 67.544µs
	I1108 00:11:33.986450 1574999 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1108 00:11:33.986460 1574999 cache.go:107] acquiring lock: {Name:mk056c2b8d4fa01baa4d27688b624c39a4b0d073 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.986508 1574999 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1108 00:11:33.986523 1574999 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 58.305µs
	I1108 00:11:33.986533 1574999 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1108 00:11:33.986549 1574999 cache.go:107] acquiring lock: {Name:mk642d2c2ec64bded73e646a7e731120f64551b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.986584 1574999 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1108 00:11:33.986593 1574999 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 45.357µs
	I1108 00:11:33.986605 1574999 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1108 00:11:33.986625 1574999 cache.go:107] acquiring lock: {Name:mk9e35b0d2aaeaeda779068ba9df17902904c0e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.986656 1574999 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1108 00:11:33.986666 1574999 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 52.701µs
	I1108 00:11:33.986673 1574999 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1108 00:11:33.986681 1574999 cache.go:107] acquiring lock: {Name:mkfd477add759fc1236d874a521416746bbb0d5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.986719 1574999 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1108 00:11:33.986728 1574999 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 47.351µs
	I1108 00:11:33.986736 1574999 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1108 00:11:33.986751 1574999 cache.go:107] acquiring lock: {Name:mk42aa21c87376b51008fca60e54933db2e47906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.986784 1574999 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1108 00:11:33.986792 1574999 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 42.117µs
	I1108 00:11:33.986801 1574999 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1108 00:11:33.986810 1574999 cache.go:107] acquiring lock: {Name:mka6ad01a28295ccdde91d33e10b2426036c340e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:11:33.986851 1574999 cache.go:115] /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1108 00:11:33.986863 1574999 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 54.646µs
	I1108 00:11:33.986877 1574999 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1108 00:11:33.986887 1574999 cache.go:87] Successfully saved all images to host disk.
	I1108 00:11:33.987220 1574999 cli_runner.go:164] Run: docker container inspect stopped-upgrade-312173 --format={{.State.Status}}
	I1108 00:11:34.016917 1574999 fix.go:102] recreateIfNeeded on stopped-upgrade-312173: state=Stopped err=<nil>
	W1108 00:11:34.016950 1574999 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:11:34.022303 1574999 out.go:177] * Restarting existing docker container for "stopped-upgrade-312173" ...
	I1108 00:11:34.024276 1574999 cli_runner.go:164] Run: docker start stopped-upgrade-312173
	I1108 00:11:34.413576 1574999 cli_runner.go:164] Run: docker container inspect stopped-upgrade-312173 --format={{.State.Status}}
	I1108 00:11:34.449387 1574999 kic.go:430] container "stopped-upgrade-312173" state is running.
	I1108 00:11:34.451985 1574999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-312173
	I1108 00:11:34.471405 1574999 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/stopped-upgrade-312173/config.json ...
	I1108 00:11:34.471659 1574999 machine.go:88] provisioning docker machine ...
	I1108 00:11:34.471682 1574999 ubuntu.go:169] provisioning hostname "stopped-upgrade-312173"
	I1108 00:11:34.471742 1574999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-312173
	I1108 00:11:34.493313 1574999 main.go:141] libmachine: Using SSH client type: native
	I1108 00:11:34.493781 1574999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34250 <nil> <nil>}
	I1108 00:11:34.493795 1574999 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-312173 && echo "stopped-upgrade-312173" | sudo tee /etc/hostname
	I1108 00:11:34.494443 1574999 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 00:11:37.654983 1574999 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-312173
	
	I1108 00:11:37.655108 1574999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-312173
	I1108 00:11:37.674693 1574999 main.go:141] libmachine: Using SSH client type: native
	I1108 00:11:37.675112 1574999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34250 <nil> <nil>}
	I1108 00:11:37.675136 1574999 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-312173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-312173/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-312173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:11:37.814958 1574999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:11:37.814997 1574999 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-1449649/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-1449649/.minikube}
	I1108 00:11:37.815021 1574999 ubuntu.go:177] setting up certificates
	I1108 00:11:37.815032 1574999 provision.go:83] configureAuth start
	I1108 00:11:37.815094 1574999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-312173
	I1108 00:11:37.833414 1574999 provision.go:138] copyHostCerts
	I1108 00:11:37.833492 1574999 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem, removing ...
	I1108 00:11:37.833515 1574999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem
	I1108 00:11:37.833593 1574999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.pem (1082 bytes)
	I1108 00:11:37.833730 1574999 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem, removing ...
	I1108 00:11:37.833742 1574999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem
	I1108 00:11:37.833774 1574999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/cert.pem (1123 bytes)
	I1108 00:11:37.833830 1574999 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem, removing ...
	I1108 00:11:37.833841 1574999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem
	I1108 00:11:37.833868 1574999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-1449649/.minikube/key.pem (1675 bytes)
	I1108 00:11:37.833918 1574999 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-312173 san=[192.168.59.110 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-312173]
	I1108 00:11:38.000308 1574999 provision.go:172] copyRemoteCerts
	I1108 00:11:38.000390 1574999 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:11:38.000441 1574999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-312173
	I1108 00:11:38.021832 1574999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/stopped-upgrade-312173/id_rsa Username:docker}
	I1108 00:11:38.124243 1574999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:11:38.149108 1574999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 00:11:38.172060 1574999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:11:38.195369 1574999 provision.go:86] duration metric: configureAuth took 380.318655ms
	I1108 00:11:38.195439 1574999 ubuntu.go:193] setting minikube options for container-runtime
	I1108 00:11:38.195661 1574999 config.go:182] Loaded profile config "stopped-upgrade-312173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1108 00:11:38.195774 1574999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-312173
	I1108 00:11:38.214699 1574999 main.go:141] libmachine: Using SSH client type: native
	I1108 00:11:38.215113 1574999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 34250 <nil> <nil>}
	I1108 00:11:38.215136 1574999 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:11:38.629443 1574999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:11:38.629469 1574999 machine.go:91] provisioned docker machine in 4.157790395s
	I1108 00:11:38.629480 1574999 start.go:300] post-start starting for "stopped-upgrade-312173" (driver="docker")
	I1108 00:11:38.629491 1574999 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:11:38.629559 1574999 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:11:38.629606 1574999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-312173
	I1108 00:11:38.647956 1574999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/stopped-upgrade-312173/id_rsa Username:docker}
	I1108 00:11:38.751480 1574999 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:11:38.755546 1574999 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1108 00:11:38.755575 1574999 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 00:11:38.755587 1574999 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1108 00:11:38.755594 1574999 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1108 00:11:38.755605 1574999 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/addons for local assets ...
	I1108 00:11:38.755661 1574999 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-1449649/.minikube/files for local assets ...
	I1108 00:11:38.755743 1574999 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem -> 14550192.pem in /etc/ssl/certs
	I1108 00:11:38.755854 1574999 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:11:38.764610 1574999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/ssl/certs/14550192.pem --> /etc/ssl/certs/14550192.pem (1708 bytes)
	I1108 00:11:38.789038 1574999 start.go:303] post-start completed in 159.541465ms
	I1108 00:11:38.789125 1574999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 00:11:38.789174 1574999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-312173
	I1108 00:11:38.807250 1574999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/stopped-upgrade-312173/id_rsa Username:docker}
	I1108 00:11:38.904137 1574999 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 00:11:38.909764 1574999 fix.go:56] fixHost completed within 4.923724945s
	I1108 00:11:38.909797 1574999 start.go:83] releasing machines lock for "stopped-upgrade-312173", held for 4.92381607s
	I1108 00:11:38.909882 1574999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-312173
	I1108 00:11:38.928162 1574999 ssh_runner.go:195] Run: cat /version.json
	I1108 00:11:38.928213 1574999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-312173
	I1108 00:11:38.928229 1574999 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:11:38.928287 1574999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-312173
	I1108 00:11:38.954543 1574999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/stopped-upgrade-312173/id_rsa Username:docker}
	I1108 00:11:38.959301 1574999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/stopped-upgrade-312173/id_rsa Username:docker}
	W1108 00:11:39.185741 1574999 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1108 00:11:39.185829 1574999 ssh_runner.go:195] Run: systemctl --version
	I1108 00:11:39.191178 1574999 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:11:39.362662 1574999 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1108 00:11:39.368431 1574999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:11:39.391485 1574999 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1108 00:11:39.391580 1574999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:11:39.424237 1574999 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:11:39.424262 1574999 start.go:472] detecting cgroup driver to use...
	I1108 00:11:39.424296 1574999 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1108 00:11:39.424346 1574999 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:11:39.454974 1574999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:11:39.466656 1574999 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:11:39.466759 1574999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:11:39.478560 1574999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:11:39.490964 1574999 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1108 00:11:39.503340 1574999 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1108 00:11:39.503438 1574999 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:11:39.602602 1574999 docker.go:219] disabling docker service ...
	I1108 00:11:39.602687 1574999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:11:39.616552 1574999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:11:39.629042 1574999 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:11:39.724816 1574999 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:11:39.833633 1574999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:11:39.847731 1574999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:11:39.865531 1574999 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1108 00:11:39.865600 1574999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:11:39.878646 1574999 out.go:177] 
	W1108 00:11:39.880319 1574999 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1108 00:11:39.880337 1574999 out.go:239] * 
	* 
	W1108 00:11:39.881282 1574999 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:11:39.883325 1574999 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-312173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (82.59s)
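The fatal step above is the pause_image rewrite: on the machine provisioned by the old v1.17.0 binary, /etc/crio/crio.conf.d/02-crio.conf does not exist, so the sed command exits with status 2 and minikube aborts with RUNTIME_ENABLE. A minimal defensive sketch of that step, assuming the drop-in only needs to be created and seeded before editing (hypothetical; not what the binary under test actually runs):

	# CONF is a hypothetical variable; the path and pause image value are taken from the log above.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo mkdir -p "$(dirname "$CONF")"
	# Seed a pause_image entry if the drop-in is missing or empty, then rewrite it in place.
	sudo grep -q '^pause_image' "$CONF" 2>/dev/null || \
	  echo 'pause_image = "registry.k8s.io/pause:3.2"' | sudo tee -a "$CONF" >/dev/null
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"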

                                                
                                    

Test pass (272/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 24.9
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.3/json-events 13.56
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.11
16 TestDownloadOnly/DeleteAll 0.26
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.66
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.13
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.13
25 TestAddons/Setup 173.02
27 TestAddons/parallel/Registry 15.73
29 TestAddons/parallel/InspektorGadget 10.96
30 TestAddons/parallel/MetricsServer 5.87
33 TestAddons/parallel/CSI 41.46
34 TestAddons/parallel/Headlamp 14.26
35 TestAddons/parallel/CloudSpanner 5.62
36 TestAddons/parallel/LocalPath 55.28
37 TestAddons/parallel/NvidiaDevicePlugin 5.65
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/StoppedEnableDisable 12.44
42 TestCertOptions 39.51
43 TestCertExpiration 257.1
45 TestForceSystemdFlag 40.91
46 TestForceSystemdEnv 47.01
52 TestErrorSpam/setup 31.63
53 TestErrorSpam/start 0.91
54 TestErrorSpam/status 1.13
55 TestErrorSpam/pause 1.88
56 TestErrorSpam/unpause 2.01
57 TestErrorSpam/stop 1.53
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 77.49
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 43.97
64 TestFunctional/serial/KubeContext 0.07
65 TestFunctional/serial/KubectlGetPods 0.1
68 TestFunctional/serial/CacheCmd/cache/add_remote 4.89
69 TestFunctional/serial/CacheCmd/cache/add_local 1.55
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.29
74 TestFunctional/serial/CacheCmd/cache/delete 0.16
75 TestFunctional/serial/MinikubeKubectlCmd 0.17
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
77 TestFunctional/serial/ExtraConfig 34.22
78 TestFunctional/serial/ComponentHealth 0.1
79 TestFunctional/serial/LogsCmd 1.94
80 TestFunctional/serial/LogsFileCmd 1.98
81 TestFunctional/serial/InvalidService 4.61
83 TestFunctional/parallel/ConfigCmd 0.66
84 TestFunctional/parallel/DashboardCmd 9.84
85 TestFunctional/parallel/DryRun 0.52
86 TestFunctional/parallel/InternationalLanguage 0.23
87 TestFunctional/parallel/StatusCmd 1.41
91 TestFunctional/parallel/ServiceCmdConnect 12.77
92 TestFunctional/parallel/AddonsCmd 0.27
93 TestFunctional/parallel/PersistentVolumeClaim 27.59
95 TestFunctional/parallel/SSHCmd 0.94
96 TestFunctional/parallel/CpCmd 1.58
98 TestFunctional/parallel/FileSync 0.44
99 TestFunctional/parallel/CertSync 3.11
103 TestFunctional/parallel/NodeLabels 0.1
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.8
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.46
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
119 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
121 TestFunctional/parallel/ProfileCmd/profile_list 0.43
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
123 TestFunctional/parallel/MountCmd/any-port 9.72
124 TestFunctional/parallel/ServiceCmd/List 0.59
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
127 TestFunctional/parallel/ServiceCmd/Format 0.43
128 TestFunctional/parallel/ServiceCmd/URL 0.44
129 TestFunctional/parallel/MountCmd/specific-port 2.23
130 TestFunctional/parallel/MountCmd/VerifyCleanup 3.1
131 TestFunctional/parallel/Version/short 0.09
132 TestFunctional/parallel/Version/components 1.11
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
137 TestFunctional/parallel/ImageCommands/ImageBuild 3.41
138 TestFunctional/parallel/ImageCommands/Setup 2.5
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.53
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.92
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.05
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.92
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.36
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.99
149 TestFunctional/delete_addon-resizer_images 0.08
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 99.86
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.67
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.75
162 TestJSONOutput/start/Command 79.16
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.87
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.74
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.87
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.27
187 TestKicCustomNetwork/create_custom_network 42.09
188 TestKicCustomNetwork/use_default_bridge_network 34.77
189 TestKicExistingNetwork 36.37
190 TestKicCustomSubnet 36.76
191 TestKicStaticIP 33.76
192 TestMainNoArgs 0.07
193 TestMinikubeProfile 73.94
196 TestMountStart/serial/StartWithMountFirst 7.44
197 TestMountStart/serial/VerifyMountFirst 0.3
198 TestMountStart/serial/StartWithMountSecond 7.11
199 TestMountStart/serial/VerifyMountSecond 0.29
200 TestMountStart/serial/DeleteFirst 1.69
201 TestMountStart/serial/VerifyMountPostDelete 0.28
202 TestMountStart/serial/Stop 1.24
203 TestMountStart/serial/RestartStopped 8.09
204 TestMountStart/serial/VerifyMountPostStop 0.29
207 TestMultiNode/serial/FreshStart2Nodes 127.53
208 TestMultiNode/serial/DeployApp2Nodes 5.85
210 TestMultiNode/serial/AddNode 50.7
211 TestMultiNode/serial/ProfileList 0.37
212 TestMultiNode/serial/CopyFile 11.44
213 TestMultiNode/serial/StopNode 2.42
214 TestMultiNode/serial/StartAfterStop 12.35
215 TestMultiNode/serial/RestartKeepsNodes 122.43
216 TestMultiNode/serial/DeleteNode 5.21
217 TestMultiNode/serial/StopMultiNode 24.13
218 TestMultiNode/serial/RestartMultiNode 85.35
219 TestMultiNode/serial/ValidateNameConflict 39.74
224 TestPreload 180.05
226 TestScheduledStopUnix 108.86
229 TestInsufficientStorage 14.25
232 TestKubernetesUpgrade 384.82
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
236 TestNoKubernetes/serial/StartWithK8s 41.18
237 TestNoKubernetes/serial/StartWithStopK8s 9.6
238 TestNoKubernetes/serial/Start 10.11
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.49
240 TestNoKubernetes/serial/ProfileList 1.12
241 TestNoKubernetes/serial/Stop 1.3
242 TestNoKubernetes/serial/StartNoArgs 8.01
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
244 TestStoppedBinaryUpgrade/Setup 1.62
246 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
255 TestPause/serial/Start 80.65
256 TestPause/serial/SecondStartNoReconfiguration 43.19
257 TestPause/serial/Pause 1.1
258 TestPause/serial/VerifyStatus 0.53
259 TestPause/serial/Unpause 0.99
260 TestPause/serial/PauseAgain 1.78
261 TestPause/serial/DeletePaused 3.08
262 TestPause/serial/VerifyDeletedResources 0.48
270 TestNetworkPlugins/group/false 6.29
275 TestStartStop/group/old-k8s-version/serial/FirstStart 121.67
276 TestStartStop/group/old-k8s-version/serial/DeployApp 10.61
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
278 TestStartStop/group/old-k8s-version/serial/Stop 12.24
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
280 TestStartStop/group/old-k8s-version/serial/SecondStart 426.22
282 TestStartStop/group/no-preload/serial/FirstStart 67.76
283 TestStartStop/group/no-preload/serial/DeployApp 10.47
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
285 TestStartStop/group/no-preload/serial/Stop 12.12
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
287 TestStartStop/group/no-preload/serial/SecondStart 632.14
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
291 TestStartStop/group/old-k8s-version/serial/Pause 4.45
293 TestStartStop/group/embed-certs/serial/FirstStart 81.85
294 TestStartStop/group/embed-certs/serial/DeployApp 10.51
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
296 TestStartStop/group/embed-certs/serial/Stop 12.13
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
298 TestStartStop/group/embed-certs/serial/SecondStart 638.22
299 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
300 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
301 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
302 TestStartStop/group/no-preload/serial/Pause 3.8
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.44
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.48
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.6
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.23
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 608.18
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.2
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.6
313 TestStartStop/group/embed-certs/serial/Pause 4.7
315 TestStartStop/group/newest-cni/serial/FirstStart 46.54
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.33
318 TestStartStop/group/newest-cni/serial/Stop 1.5
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
320 TestStartStop/group/newest-cni/serial/SecondStart 30.14
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
324 TestStartStop/group/newest-cni/serial/Pause 3.33
325 TestNetworkPlugins/group/auto/Start 54.52
326 TestNetworkPlugins/group/auto/KubeletFlags 0.35
327 TestNetworkPlugins/group/auto/NetCatPod 10.34
328 TestNetworkPlugins/group/auto/DNS 0.21
329 TestNetworkPlugins/group/auto/Localhost 0.19
330 TestNetworkPlugins/group/auto/HairPin 0.24
331 TestNetworkPlugins/group/flannel/Start 66.18
332 TestNetworkPlugins/group/flannel/ControllerPod 5.05
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
334 TestNetworkPlugins/group/flannel/NetCatPod 11.33
335 TestNetworkPlugins/group/flannel/DNS 0.28
336 TestNetworkPlugins/group/flannel/Localhost 0.2
337 TestNetworkPlugins/group/flannel/HairPin 0.22
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.52
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.72
342 TestNetworkPlugins/group/calico/Start 78.63
343 TestNetworkPlugins/group/custom-flannel/Start 74.49
344 TestNetworkPlugins/group/calico/ControllerPod 5.05
345 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
346 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
347 TestNetworkPlugins/group/calico/KubeletFlags 0.38
348 TestNetworkPlugins/group/calico/NetCatPod 12.41
349 TestNetworkPlugins/group/custom-flannel/DNS 0.37
350 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
351 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
352 TestNetworkPlugins/group/calico/DNS 0.24
353 TestNetworkPlugins/group/calico/Localhost 0.2
354 TestNetworkPlugins/group/calico/HairPin 0.2
355 TestNetworkPlugins/group/kindnet/Start 91.07
356 TestNetworkPlugins/group/bridge/Start 97.7
357 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
359 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
360 TestNetworkPlugins/group/bridge/KubeletFlags 0.48
361 TestNetworkPlugins/group/bridge/NetCatPod 11.69
362 TestNetworkPlugins/group/kindnet/DNS 0.23
363 TestNetworkPlugins/group/kindnet/Localhost 0.18
364 TestNetworkPlugins/group/kindnet/HairPin 0.2
365 TestNetworkPlugins/group/bridge/DNS 0.22
366 TestNetworkPlugins/group/bridge/Localhost 0.19
367 TestNetworkPlugins/group/bridge/HairPin 0.19
368 TestNetworkPlugins/group/enable-default-cni/Start 83.47
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.37
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
x
+
TestDownloadOnly/v1.16.0/json-events (24.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-073722 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-073722 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (24.899771475s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (24.90s)
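For an interactive look at what this test exercises, essentially the same --download-only invocation can be piped through jq to pretty-print the JSON progress events emitted on stdout (a usage sketch; jq is not part of the test harness, and the duplicated --container-runtime flag from the recorded command is dropped):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-073722 \
	  --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq .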

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-073722
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-073722: exit status 85 (100.728119ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-073722 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC |          |
	|         | -p download-only-073722        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:29:22
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:29:22.368182 1455024 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:29:22.368348 1455024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:29:22.368355 1455024 out.go:309] Setting ErrFile to fd 2...
	I1107 23:29:22.368362 1455024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:29:22.368624 1455024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	W1107 23:29:22.368782 1455024 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-1449649/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-1449649/.minikube/config/config.json: no such file or directory
	I1107 23:29:22.369176 1455024 out.go:303] Setting JSON to true
	I1107 23:29:22.370246 1455024 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22312,"bootTime":1699377451,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1107 23:29:22.370330 1455024 start.go:138] virtualization:  
	I1107 23:29:22.373428 1455024 out.go:97] [download-only-073722] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:29:22.375244 1455024 out.go:169] MINIKUBE_LOCATION=17585
	W1107 23:29:22.373706 1455024 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 23:29:22.373771 1455024 notify.go:220] Checking for updates...
	I1107 23:29:22.378597 1455024 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:29:22.380722 1455024 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:29:22.382342 1455024 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1107 23:29:22.384197 1455024 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1107 23:29:22.387607 1455024 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:29:22.387954 1455024 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:29:22.417644 1455024 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:29:22.417749 1455024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:29:22.503493 1455024 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-11-07 23:29:22.493334506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:29:22.503605 1455024 docker.go:295] overlay module found
	I1107 23:29:22.505301 1455024 out.go:97] Using the docker driver based on user configuration
	I1107 23:29:22.505326 1455024 start.go:298] selected driver: docker
	I1107 23:29:22.505333 1455024 start.go:902] validating driver "docker" against <nil>
	I1107 23:29:22.505458 1455024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:29:22.571653 1455024 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-11-07 23:29:22.561964836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:29:22.571806 1455024 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:29:22.572074 1455024 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1107 23:29:22.572225 1455024 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 23:29:22.574180 1455024 out.go:169] Using Docker driver with root privileges
	I1107 23:29:22.576205 1455024 cni.go:84] Creating CNI manager for ""
	I1107 23:29:22.576237 1455024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:29:22.576251 1455024 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:29:22.576275 1455024 start_flags.go:323] config:
	{Name:download-only-073722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-073722 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:29:22.577946 1455024 out.go:97] Starting control plane node download-only-073722 in cluster download-only-073722
	I1107 23:29:22.578063 1455024 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:29:22.579615 1455024 out.go:97] Pulling base image ...
	I1107 23:29:22.579650 1455024 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:29:22.579814 1455024 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:29:22.598641 1455024 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:29:22.598841 1455024 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:29:22.598944 1455024 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:29:22.653340 1455024 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1107 23:29:22.653365 1455024 cache.go:56] Caching tarball of preloaded images
	I1107 23:29:22.654094 1455024 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:29:22.656170 1455024 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 23:29:22.656195 1455024 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1107 23:29:22.830174 1455024 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1107 23:29:32.609129 1455024 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1107 23:29:32.609862 1455024 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1107 23:29:33.617135 1455024 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1107 23:29:33.617529 1455024 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/download-only-073722/config.json ...
	I1107 23:29:33.617563 1455024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/download-only-073722/config.json: {Name:mka587eb15894dfe4427c754f9475251694f4872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:29:33.618240 1455024 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:29:33.618455 1455024 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-073722"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
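The preload download recorded in the log above can be reproduced by hand when debugging cache issues; a sketch assuming curl and md5sum are available (the URL and md5 value are the ones minikube printed):

	URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4'
	curl -fL -o preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 "$URL"
	# Expect 743cd3b7071469270e4dbdc0d89badaa, the checksum requested in the download log.
	md5sum preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4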

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/json-events (13.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-073722 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-073722 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.555196761s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (13.56s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-073722
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-073722: exit status 85 (106.19081ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-073722 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC |          |
	|         | -p download-only-073722        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-073722 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC |          |
	|         | -p download-only-073722        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:29:47
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:29:47.364684 1455098 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:29:47.364898 1455098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:29:47.364925 1455098 out.go:309] Setting ErrFile to fd 2...
	I1107 23:29:47.364943 1455098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:29:47.365216 1455098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	W1107 23:29:47.365368 1455098 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-1449649/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-1449649/.minikube/config/config.json: no such file or directory
	I1107 23:29:47.365663 1455098 out.go:303] Setting JSON to true
	I1107 23:29:47.366728 1455098 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22337,"bootTime":1699377451,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1107 23:29:47.366826 1455098 start.go:138] virtualization:  
	I1107 23:29:47.369993 1455098 out.go:97] [download-only-073722] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:29:47.370336 1455098 notify.go:220] Checking for updates...
	I1107 23:29:47.375218 1455098 out.go:169] MINIKUBE_LOCATION=17585
	I1107 23:29:47.378035 1455098 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:29:47.380607 1455098 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:29:47.383324 1455098 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1107 23:29:47.386169 1455098 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1107 23:29:47.391288 1455098 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:29:47.391826 1455098 config.go:182] Loaded profile config "download-only-073722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1107 23:29:47.391910 1455098 start.go:810] api.Load failed for download-only-073722: filestore "download-only-073722": Docker machine "download-only-073722" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 23:29:47.392012 1455098 driver.go:378] Setting default libvirt URI to qemu:///system
	W1107 23:29:47.392040 1455098 start.go:810] api.Load failed for download-only-073722: filestore "download-only-073722": Docker machine "download-only-073722" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 23:29:47.415822 1455098 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:29:47.415917 1455098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:29:47.499104 1455098 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-07 23:29:47.488967359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:29:47.499203 1455098 docker.go:295] overlay module found
	I1107 23:29:47.502264 1455098 out.go:97] Using the docker driver based on existing profile
	I1107 23:29:47.502298 1455098 start.go:298] selected driver: docker
	I1107 23:29:47.502306 1455098 start.go:902] validating driver "docker" against &{Name:download-only-073722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-073722 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:29:47.502476 1455098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:29:47.567056 1455098 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-07 23:29:47.557557197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:29:47.567560 1455098 cni.go:84] Creating CNI manager for ""
	I1107 23:29:47.567578 1455098 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:29:47.567590 1455098 start_flags.go:323] config:
	{Name:download-only-073722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-073722 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:29:47.570316 1455098 out.go:97] Starting control plane node download-only-073722 in cluster download-only-073722
	I1107 23:29:47.570347 1455098 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:29:47.572943 1455098 out.go:97] Pulling base image ...
	I1107 23:29:47.572966 1455098 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:29:47.573128 1455098 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:29:47.590219 1455098 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:29:47.590362 1455098 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:29:47.590381 1455098 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1107 23:29:47.590386 1455098 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1107 23:29:47.590394 1455098 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 23:29:47.657307 1455098 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1107 23:29:47.657348 1455098 cache.go:56] Caching tarball of preloaded images
	I1107 23:29:47.657530 1455098 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:29:47.660364 1455098 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1107 23:29:47.660393 1455098 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1107 23:29:47.811232 1455098 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/17585-1449649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-073722"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.11s)

TestDownloadOnly/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-073722
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-523429 --alsologtostderr --binary-mirror http://127.0.0.1:43481 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-523429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-523429
--- PASS: TestBinaryMirror (0.66s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.13s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-862145
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-862145: exit status 85 (125.583234ms)

-- stdout --
	* Profile "addons-862145" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-862145"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.13s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.13s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-862145
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-862145: exit status 85 (133.871724ms)

-- stdout --
	* Profile "addons-862145" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-862145"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.13s)

TestAddons/Setup (173.02s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-862145 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-862145 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m53.023127471s)
--- PASS: TestAddons/Setup (173.02s)

TestAddons/parallel/Registry (15.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 61.944224ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qmpph" [b847a662-a103-451a-bab8-36206ce3090e] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016033124s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-szzdm" [c1a5f788-6327-49f4-8217-bf0ad8166124] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.026124226s
addons_test.go:339: (dbg) Run:  kubectl --context addons-862145 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-862145 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-862145 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.502637031s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 ip
2023/11/07 23:33:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.73s)

TestAddons/parallel/InspektorGadget (10.96s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5mx2z" [3996dd04-fccf-4054-8046-f41a05c8c5cf] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013154518s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-862145
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-862145: (5.944534863s)
--- PASS: TestAddons/parallel/InspektorGadget (10.96s)

TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 6.534207ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-dcc2j" [1ba7f677-6e2b-446c-849f-3ac9b4119c36] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.019159961s
addons_test.go:414: (dbg) Run:  kubectl --context addons-862145 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

TestAddons/parallel/CSI (41.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 5.311671ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-862145 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-862145 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a9b71270-4c45-461e-b305-b4e12dfdf3ce] Pending
helpers_test.go:344: "task-pv-pod" [a9b71270-4c45-461e-b305-b4e12dfdf3ce] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a9b71270-4c45-461e-b305-b4e12dfdf3ce] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.023142701s
addons_test.go:583: (dbg) Run:  kubectl --context addons-862145 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-862145 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-862145 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-862145 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-862145 delete pod task-pv-pod: (1.015653276s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-862145 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-862145 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-862145 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d2e25908-2c0a-44b4-8f5d-c1d364e9a558] Pending
helpers_test.go:344: "task-pv-pod-restore" [d2e25908-2c0a-44b4-8f5d-c1d364e9a558] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d2e25908-2c0a-44b4-8f5d-c1d364e9a558] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.029765251s
addons_test.go:625: (dbg) Run:  kubectl --context addons-862145 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-862145 delete pod task-pv-pod-restore: (1.063076194s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-862145 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-862145 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-862145 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.907924855s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.46s)

TestAddons/parallel/Headlamp (14.26s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-862145 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-862145 --alsologtostderr -v=1: (1.228392887s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-bvwfw" [59e5b3d6-dc93-416f-9888-7106ea82d124] Pending
helpers_test.go:344: "headlamp-94b766c-bvwfw" [59e5b3d6-dc93-416f-9888-7106ea82d124] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-bvwfw" [59e5b3d6-dc93-416f-9888-7106ea82d124] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-bvwfw" [59e5b3d6-dc93-416f-9888-7106ea82d124] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.029695633s
--- PASS: TestAddons/parallel/Headlamp (14.26s)

TestAddons/parallel/CloudSpanner (5.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-wn6zw" [db9187ad-f771-49d2-a7fb-9706c300d8f5] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.017637444s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-862145
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

TestAddons/parallel/LocalPath (55.28s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-862145 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-862145 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-862145 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [37aa659c-9095-4a19-85a7-8b456667595a] Pending
helpers_test.go:344: "test-local-path" [37aa659c-9095-4a19-85a7-8b456667595a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [37aa659c-9095-4a19-85a7-8b456667595a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [37aa659c-9095-4a19-85a7-8b456667595a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.011377129s
addons_test.go:890: (dbg) Run:  kubectl --context addons-862145 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 ssh "cat /opt/local-path-provisioner/pvc-1bd0fe32-732d-473f-9e2b-8ba652c5c557_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-862145 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-862145 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-862145 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-862145 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.111805603s)
--- PASS: TestAddons/parallel/LocalPath (55.28s)

TestAddons/parallel/NvidiaDevicePlugin (5.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2mxvg" [af169897-3b36-43a5-87a8-fead1e07bc56] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.07666046s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-862145
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-862145 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-862145 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-862145
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-862145: (12.097408769s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-862145
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-862145
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-862145
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

TestCertOptions (39.51s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-717605 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1108 00:16:19.456463 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-717605 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.68589142s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-717605 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-717605 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-717605 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-717605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-717605
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-717605: (2.066912637s)
--- PASS: TestCertOptions (39.51s)

TestCertExpiration (257.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-537738 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-537738 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.711438691s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-537738 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-537738 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (31.625484747s)
helpers_test.go:175: Cleaning up "cert-expiration-537738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-537738
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-537738: (2.767209597s)
--- PASS: TestCertExpiration (257.10s)

TestForceSystemdFlag (40.91s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-185654 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-185654 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.819048249s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-185654 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-185654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-185654
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-185654: (2.67074659s)
--- PASS: TestForceSystemdFlag (40.91s)

TestForceSystemdEnv (47.01s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-159566 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-159566 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.472000095s)
helpers_test.go:175: Cleaning up "force-systemd-env-159566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-159566
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-159566: (2.539317281s)
--- PASS: TestForceSystemdEnv (47.01s)

TestErrorSpam/setup (31.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-991960 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-991960 --driver=docker  --container-runtime=crio
E1107 23:37:56.101707 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:56.108512 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:56.118757 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:56.139016 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:56.179298 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:56.259569 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:56.419937 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:56.740440 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:57.380922 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:37:58.661125 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:38:01.221350 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:38:06.342446 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-991960 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-991960 --driver=docker  --container-runtime=crio: (31.627086351s)
--- PASS: TestErrorSpam/setup (31.63s)

TestErrorSpam/start (0.91s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 start --dry-run
--- PASS: TestErrorSpam/start (0.91s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.88s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 pause
--- PASS: TestErrorSpam/pause (1.88s)

TestErrorSpam/unpause (2.01s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 unpause
E1107 23:38:16.583041 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 unpause
--- PASS: TestErrorSpam/unpause (2.01s)

TestErrorSpam/stop (1.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 stop: (1.289051073s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991960 --log_dir /tmp/nospam-991960 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17585-1449649/.minikube/files/etc/test/nested/copy/1455019/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.49s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421985 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1107 23:38:37.063610 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:39:18.023844 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-421985 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.494553322s)
--- PASS: TestFunctional/serial/StartWithProxy (77.49s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.97s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421985 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-421985 --alsologtostderr -v=8: (43.968205049s)
functional_test.go:659: soft start took 43.96872808s for "functional-421985" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.97s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-421985 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 cache add registry.k8s.io/pause:3.1: (1.687599682s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 cache add registry.k8s.io/pause:3.3: (1.596927984s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 cache add registry.k8s.io/pause:latest: (1.606401872s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.89s)

TestFunctional/serial/CacheCmd/cache/add_local (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-421985 /tmp/TestFunctionalserialCacheCmdcacheadd_local514468797/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 cache add minikube-local-cache-test:functional-421985
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 cache add minikube-local-cache-test:functional-421985: (1.012393871s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 cache delete minikube-local-cache-test:functional-421985
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-421985
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.55s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (335.944296ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 cache reload: (1.186751408s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.29s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 kubectl -- --context functional-421985 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-421985 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (34.22s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421985 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1107 23:40:39.944983 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-421985 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.224265759s)
functional_test.go:757: restart took 34.224399164s for "functional-421985" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.22s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-421985 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 logs: (1.938235576s)
--- PASS: TestFunctional/serial/LogsCmd (1.94s)

TestFunctional/serial/LogsFileCmd (1.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 logs --file /tmp/TestFunctionalserialLogsFileCmd1386377501/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 logs --file /tmp/TestFunctionalserialLogsFileCmd1386377501/001/logs.txt: (1.97751178s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.98s)

TestFunctional/serial/InvalidService (4.61s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-421985 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-421985
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-421985: exit status 115 (710.891871ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32608 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-421985 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.61s)
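
Note: the InvalidService check expects `minikube service` to exit with status 115 (SVC_UNREACHABLE) when the service has no running pod behind it. A hedged sketch of extracting that exit code from an *exec.ExitError in Go, using the binary and profile names from the log:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Command taken from the log above; the service has no running pod, so this should fail.
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-421985")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Fatal("expected the service command to fail, but it succeeded")
	case errors.As(err, &exitErr):
		fmt.Printf("exit code: %d (the log above reports 115 for SVC_UNREACHABLE)\n", exitErr.ExitCode())
	default:
		log.Fatalf("command did not start: %v", err)
	}
}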

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 config get cpus: exit status 14 (105.306071ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 config get cpus: exit status 14 (100.955078ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.66s)
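
Note: ConfigCmd cycles `config set/get/unset cpus` and relies on exit status 14 to signal that the key is absent. A sketch, using the same binary and profile, that wraps `config get cpus` and maps exit 14 to a "not set" result; the getCPUs helper is illustrative, not minikube API:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// getCPUs returns the configured value, or ok=false when minikube exits
// with status 14 (key not present), matching the behaviour captured above.
func getCPUs() (value string, ok bool, err error) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-421985",
		"config", "get", "cpus").Output()
	if err == nil {
		return strings.TrimSpace(string(out)), true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		return "", false, nil
	}
	return "", false, err
}

func main() {
	value, ok, err := getCPUs()
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		fmt.Println("cpus is not set")
		return
	}
	fmt.Println("cpus =", value)
}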

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-421985 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-421985 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1481070: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.84s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-421985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (225.603722ms)

                                                
                                                
-- stdout --
	* [functional-421985] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 23:41:54.972150 1480733 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:41:54.972327 1480733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:41:54.972358 1480733 out.go:309] Setting ErrFile to fd 2...
	I1107 23:41:54.972378 1480733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:41:54.972661 1480733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1107 23:41:54.973048 1480733 out.go:303] Setting JSON to false
	I1107 23:41:54.974287 1480733 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23064,"bootTime":1699377451,"procs":428,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1107 23:41:54.974392 1480733 start.go:138] virtualization:  
	I1107 23:41:54.977835 1480733 out.go:177] * [functional-421985] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:41:54.979672 1480733 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:41:54.979758 1480733 notify.go:220] Checking for updates...
	I1107 23:41:54.981444 1480733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:41:54.983319 1480733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:41:54.984872 1480733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1107 23:41:54.986728 1480733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:41:54.988587 1480733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:41:54.990581 1480733 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:41:54.991100 1480733 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:41:55.022230 1480733 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:41:55.022364 1480733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:41:55.119632 1480733 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-07 23:41:55.109033036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:41:55.119733 1480733 docker.go:295] overlay module found
	I1107 23:41:55.121682 1480733 out.go:177] * Using the docker driver based on existing profile
	I1107 23:41:55.123688 1480733 start.go:298] selected driver: docker
	I1107 23:41:55.123708 1480733 start.go:902] validating driver "docker" against &{Name:functional-421985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-421985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:41:55.123831 1480733 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:41:55.126085 1480733 out.go:177] 
	W1107 23:41:55.127900 1480733 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 23:41:55.129599 1480733 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421985 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.52s)
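
Note: the dry run is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the 1800MB usable minimum reported in the output. An illustrative sketch of that style of pre-flight check; the 1800MB threshold comes from the log above, and validateMemory is a hypothetical helper, not minikube's validator:

package main

import (
	"fmt"
	"log"
)

// minUsableMemoryMB is the minimum reported by the dry run in the log above.
const minUsableMemoryMB = 1800

// validateMemory mirrors the dry-run check: reject requests below the usable minimum.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		log.Println("dry run would fail:", err) // the RSRC_INSUFFICIENT_REQ_MEMORY case
	}
	if err := validateMemory(4000); err == nil {
		fmt.Println("4000MB passes the pre-flight memory check")
	}
}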

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-421985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (226.742387ms)

                                                
                                                
-- stdout --
	* [functional-421985] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 23:41:54.763273 1480693 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:41:54.763446 1480693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:41:54.763457 1480693 out.go:309] Setting ErrFile to fd 2...
	I1107 23:41:54.763464 1480693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:41:54.763823 1480693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1107 23:41:54.764182 1480693 out.go:303] Setting JSON to false
	I1107 23:41:54.765449 1480693 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23064,"bootTime":1699377451,"procs":428,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1107 23:41:54.765528 1480693 start.go:138] virtualization:  
	I1107 23:41:54.767783 1480693 out.go:177] * [functional-421985] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1107 23:41:54.769933 1480693 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:41:54.771603 1480693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:41:54.770020 1480693 notify.go:220] Checking for updates...
	I1107 23:41:54.775256 1480693 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1107 23:41:54.777241 1480693 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1107 23:41:54.779062 1480693 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:41:54.780782 1480693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:41:54.782932 1480693 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:41:54.783535 1480693 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:41:54.807636 1480693 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:41:54.807752 1480693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:41:54.893940 1480693 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-07 23:41:54.883873289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:41:54.894065 1480693 docker.go:295] overlay module found
	I1107 23:41:54.897345 1480693 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1107 23:41:54.898827 1480693 start.go:298] selected driver: docker
	I1107 23:41:54.898845 1480693 start.go:902] validating driver "docker" against &{Name:functional-421985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-421985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:41:54.898956 1480693 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:41:54.901050 1480693 out.go:177] 
	W1107 23:41:54.902924 1480693 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 23:41:54.904526 1480693 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (12.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-421985 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-421985 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-5csmj" [a9afddcd-eaed-47e1-b5a7-32809c05709c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-5csmj" [a9afddcd-eaed-47e1-b5a7-32809c05709c] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.024603962s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30449
functional_test.go:1674: http://192.168.49.2:30449: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-5csmj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30449
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.77s)
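
Note: ServiceCmdConnect resolves the NodePort URL with `service hello-node-connect --url` and then fetches it, expecting the echoserver body shown above. A sketch that polls the endpoint reported in the log (http://192.168.49.2:30449) until it returns 200, assuming the service is still exposed:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint reported by `minikube service hello-node-connect --url` in the log above.
	url := "http://192.168.49.2:30449"
	var lastErr error
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s: success! body:\n%s\n", url, body)
				return
			}
			lastErr = fmt.Errorf("unexpected status %d", resp.StatusCode)
		} else {
			lastErr = err
		}
		time.Sleep(2 * time.Second) // wait for the NodePort backend to come up, then retry
	}
	log.Fatalf("endpoint never became healthy: %v", lastErr)
}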

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [070c62e9-a5cf-4548-a6dd-f22bd44208f6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.046741518s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-421985 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-421985 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-421985 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-421985 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-421985 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ca9fa618-f19e-40ff-bba9-878e864a59b0] Pending
helpers_test.go:344: "sp-pod" [ca9fa618-f19e-40ff-bba9-878e864a59b0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ca9fa618-f19e-40ff-bba9-878e864a59b0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.023369957s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-421985 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-421985 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-421985 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [876f0ae9-1f4d-4d49-bee1-a5346d4999f2] Pending
helpers_test.go:344: "sp-pod" [876f0ae9-1f4d-4d49-bee1-a5346d4999f2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [876f0ae9-1f4d-4d49-bee1-a5346d4999f2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.018634903s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-421985 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.59s)
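
Note: the PVC flow above writes /tmp/mount/foo from the first sp-pod, deletes that pod, recreates it from the same manifest, and checks that the file is still on the claim. A compact sketch of the same sequence via kubectl, assuming the functional-421985 context and the testdata manifests named in the log; the run helper is illustrative only:

package main

import (
	"log"
	"os/exec"
)

// run executes kubectl against the functional-421985 context and fails fast,
// echoing the captured output on error.
func run(args ...string) {
	full := append([]string{"--context", "functional-421985"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write a file through the claim
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml") // delete the pod, keep the PVC
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // recreate the pod on the same claim
	// The real test waits for the new pod to become Ready before this step.
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // the file should still be present
}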

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh -n functional-421985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 cp functional-421985:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3544511709/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh -n functional-421985 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1455019/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo cat /etc/test/nested/copy/1455019/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (3.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1455019.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo cat /etc/ssl/certs/1455019.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1455019.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo cat /usr/share/ca-certificates/1455019.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14550192.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo cat /etc/ssl/certs/14550192.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14550192.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo cat /usr/share/ca-certificates/14550192.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.11s)
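
Note: CertSync confirms the test certificates are visible inside the node both under their original names and under hash-style names such as 51391683.0 (these look like OpenSSL subject-hash links, though the log does not say so explicitly). A sketch that cats each path from the log over `minikube ssh`, using the same binary and profile:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Paths checked by the CertSync run above.
	paths := []string{
		"/etc/ssl/certs/1455019.pem",
		"/usr/share/ca-certificates/1455019.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/14550192.pem",
		"/usr/share/ca-certificates/14550192.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-421985",
			"ssh", "sudo cat "+p).Output()
		if err != nil {
			log.Fatalf("%s not found in the node: %v", p, err)
		}
		fmt.Printf("%s: %d bytes\n", p, len(out))
	}
}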

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-421985 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 ssh "sudo systemctl is-active docker": exit status 1 (384.332447ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 ssh "sudo systemctl is-active containerd": exit status 1 (412.185069ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.80s)
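
Note: with crio as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (systemd uses status 3 for inactive units, which ssh surfaces as shown above). A sketch that treats a non-zero exit plus "inactive" output as the expected state; the isActive helper is illustrative:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// isActive asks systemd inside the node whether a unit is active.
// systemctl exits 0 for active units and non-zero otherwise, which
// minikube ssh passes through to the caller.
func isActive(unit string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-421985",
		"ssh", "sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	if err == nil {
		return state == "active", nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && state == "inactive" {
		return false, nil // the expected case for docker and containerd in this run
	}
	return false, err
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		active, err := isActive(unit)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s active: %v\n", unit, active)
	}
}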

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421985 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421985 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-421985 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1478730: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-421985 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421985 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-421985 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0defd1e2-a718-4286-ab53-77984d63d081] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0defd1e2-a718-4286-ab53-77984d63d081] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.019468215s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.46s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-421985 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.106.46 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-421985 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-421985 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-421985 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-w7hch" [1eda89ab-7599-4bc9-bd3c-458f1e5b8e78] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-w7hch" [1eda89ab-7599-4bc9-bd3c-458f1e5b8e78] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.016844343s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "347.023057ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "84.53549ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "362.420805ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "77.082361ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdany-port1280940940/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699400508375295717" to /tmp/TestFunctionalparallelMountCmdany-port1280940940/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699400508375295717" to /tmp/TestFunctionalparallelMountCmdany-port1280940940/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699400508375295717" to /tmp/TestFunctionalparallelMountCmdany-port1280940940/001/test-1699400508375295717
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (455.274573ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 23:41 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 23:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 23:41 test-1699400508375295717
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh cat /mount-9p/test-1699400508375295717
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-421985 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [45e008fb-b52b-492c-b586-b416e864f5e4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [45e008fb-b52b-492c-b586-b416e864f5e4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [45e008fb-b52b-492c-b586-b416e864f5e4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.02231822s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-421985 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdany-port1280940940/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.72s)
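
Note: the mount check starts `minikube mount` in the background and then probes `findmnt -T /mount-9p` over ssh; the first probe in the log fails because the 9p mount is not up yet, and the retry succeeds. A sketch of that polling loop, assuming the same binary, profile, and mount point; waitForMount is an illustrative helper:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForMount polls until findmnt can see the 9p mount inside the node,
// mirroring the retry visible in the log above.
func waitForMount(mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-421985",
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible
		}
		time.Sleep(time.Second) // not mounted yet; retry
	}
	return fmt.Errorf("mount %s did not appear within %s", mountPoint, timeout)
}

func main() {
	if err := waitForMount("/mount-9p", 30*time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("/mount-9p is mounted")
}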

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 service list -o json
functional_test.go:1493: Took "545.095673ms" to run "out/minikube-linux-arm64 -p functional-421985 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31976
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31976
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdspecific-port1459280957/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (406.963984ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdspecific-port1459280957/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 ssh "sudo umount -f /mount-9p": exit status 1 (486.772912ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-421985 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdspecific-port1459280957/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)
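For reference, the specific-port flow exercised above (start "minikube mount" on a fixed 9p port, probe the guest with findmnt, then tear the mount down) can be driven outside the test harness. The Go sketch below is a minimal illustration only, not the test's code; the profile name and mount point are taken from the log, /tmp/mount-src is a placeholder host directory, and minikube is assumed to be on PATH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-421985" // profile name taken from the log above

	// Start the mount daemon on a fixed 9p port, as the test does with --port 46464.
	// /tmp/mount-src is a placeholder host directory.
	mnt := exec.Command("minikube", "mount", "-p", profile,
		"/tmp/mount-src:/mount-9p", "--port", "46464")
	if err := mnt.Start(); err != nil {
		panic(err)
	}
	defer mnt.Process.Kill()

	// Probe from inside the guest; the test retries this because the first attempt
	// can race the mount becoming ready (the "exit status 1" seen above).
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"findmnt -T /mount-9p | grep 9p").CombinedOutput()
	fmt.Println(string(out), err)
}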

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup478336178/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup478336178/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup478336178/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T" /mount1: exit status 1 (1.296910729s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-421985 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup478336178/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup478336178/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup478336178/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.10s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 version -o=json --components: (1.110239967s)
--- PASS: TestFunctional/parallel/Version/components (1.11s)
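The components listing checked above is emitted as JSON. A minimal way to consume it is to decode generically, since the exact key set varies with the minikube build; the snippet below is a hedged sketch (profile name from the log, minikube assumed on PATH) that tolerates the command printing one or more JSON documents.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-421985",
		"version", "-o=json", "--components").Output()
	if err != nil {
		panic(err)
	}
	// Decode generically: the key set (minikube version, component versions, ...)
	// depends on the build, so no fixed struct is assumed here.
	dec := json.NewDecoder(bytes.NewReader(out))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		for k, v := range doc {
			fmt.Printf("%s: %v\n", k, v)
		}
	}
}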

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421985 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-421985
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421985 image ls --format short --alsologtostderr:
I1107 23:42:25.475789 1483438 out.go:296] Setting OutFile to fd 1 ...
I1107 23:42:25.476030 1483438 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:25.476051 1483438 out.go:309] Setting ErrFile to fd 2...
I1107 23:42:25.476071 1483438 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:25.476355 1483438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
I1107 23:42:25.477029 1483438 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:25.477214 1483438 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:25.477736 1483438 cli_runner.go:164] Run: docker container inspect functional-421985 --format={{.State.Status}}
I1107 23:42:25.497570 1483438 ssh_runner.go:195] Run: systemctl --version
I1107 23:42:25.497623 1483438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421985
I1107 23:42:25.517533 1483438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34078 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/functional-421985/id_rsa Username:docker}
I1107 23:42:25.607856 1483438 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421985 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/google-containers/addon-resizer  | functional-421985  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 537e9a59ee2fd | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 8276439b4f237 | 117MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.28.3            | a5dd5cdd6d3ef | 69.9MB |
| docker.io/library/nginx                 | alpine             | aae348c9fbd40 | 50.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 42a4e73724daa | 59.2MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| docker.io/library/nginx                 | latest             | 81be380254394 | 196MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421985 image ls --format table --alsologtostderr:
I1107 23:42:25.821637 1483497 out.go:296] Setting OutFile to fd 1 ...
I1107 23:42:25.821879 1483497 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:25.821903 1483497 out.go:309] Setting ErrFile to fd 2...
I1107 23:42:25.821922 1483497 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:25.822287 1483497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
I1107 23:42:25.823030 1483497 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:25.823226 1483497 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:25.823736 1483497 cli_runner.go:164] Run: docker container inspect functional-421985 --format={{.State.Status}}
I1107 23:42:25.846755 1483497 ssh_runner.go:195] Run: systemctl --version
I1107 23:42:25.846807 1483497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421985
I1107 23:42:25.880721 1483497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34078 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/functional-421985/id_rsa Username:docker}
I1107 23:42:25.974801 1483497 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421985 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"117252916"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],
"size":"60867618"},{"id":"81be38025439476d1b7303cb575df80e419fd1b3be4a639f3b3e51cf95720c7b","repoDigests":["docker.io/library/nginx@sha256:565211f0ec2c97f4118c0c1b6be7f1c7775c0b3d44c2bb72bd32983a5696aa6a","docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6"],"repoTags":["docker.io/library/nginx:latest"],"size":"196211468"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3
415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50212152"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"59188020"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5",
"repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc
192f4e20606b57419ce9e2e0c1588f960b483","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"69926807"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"121054158"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.
k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-421985"],"size":"34114467"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0
b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421985 image ls --format json --alsologtostderr:
I1107 23:42:25.779109 1483493 out.go:296] Setting OutFile to fd 1 ...
I1107 23:42:25.779323 1483493 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:25.779332 1483493 out.go:309] Setting ErrFile to fd 2...
I1107 23:42:25.779339 1483493 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:25.779658 1483493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
I1107 23:42:25.780372 1483493 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:25.780569 1483493 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:25.781148 1483493 cli_runner.go:164] Run: docker container inspect functional-421985 --format={{.State.Status}}
I1107 23:42:25.811167 1483493 ssh_runner.go:195] Run: systemctl --version
I1107 23:42:25.811222 1483493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421985
I1107 23:42:25.837886 1483493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34078 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/functional-421985/id_rsa Username:docker}
I1107 23:42:25.932541 1483493 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
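Each entry in the JSON output above carries id, repoDigests, repoTags and size (the size is a string of bytes). The parser below is an illustrative sketch of that shape, not part of the test suite; it assumes minikube on PATH and reuses the profile name from the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the "image ls --format json" output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // emitted as a string, e.g. "51393451"
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-421985",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%.12s  %v\n", img.ID, img.RepoTags)
	}
}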

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421985 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "50212152"
- id: 81be38025439476d1b7303cb575df80e419fd1b3be4a639f3b3e51cf95720c7b
repoDigests:
- docker.io/library/nginx@sha256:565211f0ec2c97f4118c0c1b6be7f1c7775c0b3d44c2bb72bd32983a5696aa6a
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
repoTags:
- docker.io/library/nginx:latest
size: "196211468"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "59188020"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-421985
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "121054158"
- id: 8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "117252916"
- id: a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "69926807"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421985 image ls --format yaml --alsologtostderr:
I1107 23:42:25.473528 1483437 out.go:296] Setting OutFile to fd 1 ...
I1107 23:42:25.473748 1483437 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:25.473760 1483437 out.go:309] Setting ErrFile to fd 2...
I1107 23:42:25.473767 1483437 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:25.474111 1483437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
I1107 23:42:25.474991 1483437 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:25.475184 1483437 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:25.475951 1483437 cli_runner.go:164] Run: docker container inspect functional-421985 --format={{.State.Status}}
I1107 23:42:25.501931 1483437 ssh_runner.go:195] Run: systemctl --version
I1107 23:42:25.502006 1483437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421985
I1107 23:42:25.527290 1483437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34078 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/functional-421985/id_rsa Username:docker}
I1107 23:42:25.630327 1483437 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421985 ssh pgrep buildkitd: exit status 1 (353.934048ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image build -t localhost/my-image:functional-421985 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 image build -t localhost/my-image:functional-421985 testdata/build --alsologtostderr: (2.790611997s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421985 image build -t localhost/my-image:functional-421985 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 585c12bfab6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-421985
--> 5adeca896f6
Successfully tagged localhost/my-image:functional-421985
5adeca896f64810d5aba98a2ca9115cfebdecf7fa242ab41032f7587c9e5e665
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421985 image build -t localhost/my-image:functional-421985 testdata/build --alsologtostderr:
I1107 23:42:26.427131 1483597 out.go:296] Setting OutFile to fd 1 ...
I1107 23:42:26.427843 1483597 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:26.427861 1483597 out.go:309] Setting ErrFile to fd 2...
I1107 23:42:26.427869 1483597 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:42:26.428184 1483597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
I1107 23:42:26.428914 1483597 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:26.429650 1483597 config.go:182] Loaded profile config "functional-421985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:42:26.430309 1483597 cli_runner.go:164] Run: docker container inspect functional-421985 --format={{.State.Status}}
I1107 23:42:26.449589 1483597 ssh_runner.go:195] Run: systemctl --version
I1107 23:42:26.449651 1483597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421985
I1107 23:42:26.469075 1483597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34078 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/functional-421985/id_rsa Username:docker}
I1107 23:42:26.560648 1483597 build_images.go:151] Building image from path: /tmp/build.1560388515.tar
I1107 23:42:26.560721 1483597 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1107 23:42:26.572546 1483597 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1560388515.tar
I1107 23:42:26.577517 1483597 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1560388515.tar: stat -c "%s %y" /var/lib/minikube/build/build.1560388515.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1560388515.tar': No such file or directory
I1107 23:42:26.577549 1483597 ssh_runner.go:362] scp /tmp/build.1560388515.tar --> /var/lib/minikube/build/build.1560388515.tar (3072 bytes)
I1107 23:42:26.609942 1483597 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1560388515
I1107 23:42:26.622148 1483597 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1560388515 -xf /var/lib/minikube/build/build.1560388515.tar
I1107 23:42:26.634622 1483597 crio.go:297] Building image: /var/lib/minikube/build/build.1560388515
I1107 23:42:26.634702 1483597 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-421985 /var/lib/minikube/build/build.1560388515 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1107 23:42:29.100346 1483597 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-421985 /var/lib/minikube/build/build.1560388515 --cgroup-manager=cgroupfs: (2.465609463s)
I1107 23:42:29.100430 1483597 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1560388515
I1107 23:42:29.111641 1483597 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1560388515.tar
I1107 23:42:29.122560 1483597 build_images.go:207] Built localhost/my-image:functional-421985 from /tmp/build.1560388515.tar
I1107 23:42:29.122592 1483597 build_images.go:123] succeeded building to: functional-421985
I1107 23:42:29.122598 1483597 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)
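The STEP lines above imply a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). The sketch below recreates an equivalent build context and invokes the same "image build" subcommand; it is an illustration under those assumptions, not the repository's testdata/build directory.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "minikube-build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Dockerfile content inferred from the STEP 1/3..3/3 lines in the log above.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644)
	os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644)

	cmd := exec.Command("minikube", "-p", "functional-421985",
		"image", "build", "-t", "localhost/my-image:functional-421985", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}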

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/11/07 23:42:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.476135005s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-421985
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image load --daemon gcr.io/google-containers/addon-resizer:functional-421985 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 image load --daemon gcr.io/google-containers/addon-resizer:functional-421985 --alsologtostderr: (5.190977792s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.53s)
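Taken together, the Setup and ImageLoadDaemon steps amount to: pull an image with the local Docker daemon, retag it for the profile, then copy it into the cluster's CRI-O store with "image load --daemon". A compressed sketch of that sequence (illustrative only; assumes both docker and minikube on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	tag := "gcr.io/google-containers/addon-resizer:functional-421985" // tag used by the test
	run("docker", "pull", "gcr.io/google-containers/addon-resizer:1.8.8")
	run("docker", "tag", "gcr.io/google-containers/addon-resizer:1.8.8", tag)
	// Copy the image from the host Docker daemon into the cluster's runtime.
	run("minikube", "-p", "functional-421985", "image", "load", "--daemon", tag)
	run("minikube", "-p", "functional-421985", "image", "ls")
}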

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image load --daemon gcr.io/google-containers/addon-resizer:functional-421985 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 image load --daemon gcr.io/google-containers/addon-resizer:functional-421985 --alsologtostderr: (2.665044093s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.136941292s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-421985
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image load --daemon gcr.io/google-containers/addon-resizer:functional-421985 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 image load --daemon gcr.io/google-containers/addon-resizer:functional-421985 --alsologtostderr: (3.636918444s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image save gcr.io/google-containers/addon-resizer:functional-421985 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image rm gcr.io/google-containers/addon-resizer:functional-421985 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-421985 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.088828095s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.36s)
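ImageSaveToFile, ImageRemove and ImageLoadFromFile above are the tarball round trip of the same idea: export an image from the cluster to a tar on the host, drop it, then import it again. The sketch below illustrates that round trip; the tar path is a placeholder rather than the Jenkins workspace path in the log.

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	profile := "functional-421985"
	img := "gcr.io/google-containers/addon-resizer:" + profile
	tar := "/tmp/addon-resizer-save.tar" // placeholder path

	run("minikube", "-p", profile, "image", "save", img, tar) // cluster -> host tarball
	run("minikube", "-p", profile, "image", "rm", img)        // drop it from the cluster
	run("minikube", "-p", profile, "image", "load", tar)      // host tarball -> cluster
	run("minikube", "-p", profile, "image", "ls")
}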

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-421985
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-421985 image save --daemon gcr.io/google-containers/addon-resizer:functional-421985 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-421985
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-421985
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-421985
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-421985
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (99.86s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-878254 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1107 23:42:56.101735 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:43:23.785217 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-878254 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m39.859261054s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (99.86s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.67s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-878254 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-878254 addons enable ingress --alsologtostderr -v=5: (16.672698847s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.67s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.75s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-878254 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.75s)

                                                
                                    
TestJSONOutput/start/Command (79.16s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-101800 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1107 23:47:41.380047 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:47:56.102085 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-101800 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m19.153906991s)
--- PASS: TestJSONOutput/start/Command (79.16s)
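With --output=json, minikube emits one CloudEvents-style JSON object per line; the envelope shape (specversion, id, source, type, and a data payload with message, name, currentstep, totalsteps) is visible in the TestErrorJSONOutput output later in this report. The consumer below is a hedged sketch under that assumption, using a hypothetical profile name.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// event matches the envelope printed under TestErrorJSONOutput below.
type event struct {
	Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.step
	Data map[string]string `json:"data"` // message, name, currentstep, totalsteps, ...
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "json-output-demo", // hypothetical profile
		"--output=json", "--driver=docker", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
		}
	}
	cmd.Wait()
}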

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.87s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-101800 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.87s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-101800 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-101800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-101800 --output=json --user=testUser: (5.871042644s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.27s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-974481 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-974481 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.892215ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0e7b3c53-d065-45f0-9a67-3e6b9364c60a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-974481] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d4f1704-76f0-483f-b055-9970a79d9988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"fe61c544-7b84-4bbc-a542-544fb7c26414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79da465b-e32b-4b6f-8b10-96e889e3688d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig"}}
	{"specversion":"1.0","id":"497afece-d0cc-4df0-a503-93666d83f743","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube"}}
	{"specversion":"1.0","id":"a799bb2e-ca42-4c18-aac3-851f3d7b547b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a842df71-d62f-434c-b39e-df12b75cd49e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c06e9bcf-9152-4723-8710-bec1174bde10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-974481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-974481
--- PASS: TestErrorJSONOutput (0.27s)
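For reference, a minimal Go sketch (not part of the test suite) for decoding the CloudEvents-style lines that minikube emits with --output=json, as captured in the stdout above. The struct fields mirror the keys visible in this run (specversion, id, source, type, data); anything beyond those keys is an assumption.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the JSON keys shown in the log above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON output
		}
		if code, ok := ev.Data["exitcode"]; ok {
			// error events carry an exitcode, e.g. the DRV_UNSUPPORTED_OS line above
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], code, ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Piping the stdout captured above through this sketch would print the setup step messages and surface the DRV_UNSUPPORTED_OS error together with its exit code 56.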

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (42.09s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-609066 --network=
E1107 23:49:03.300250 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1107 23:49:29.655677 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:29.660966 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:29.671215 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:29.691384 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:29.731629 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:29.811922 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:29.972349 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:30.292958 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:30.933835 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:32.214404 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:49:34.775472 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-609066 --network=: (39.933181057s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-609066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-609066
E1107 23:49:39.895854 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-609066: (2.136713432s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.09s)
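The network check in this test reduces to a single docker CLI call. A small Go sketch of the same lookup (assuming a local docker CLI on PATH), listing network names the way kic_custom_network_test.go:150 does via `docker network ls --format {{.Name}}`:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs above; requires the docker CLI on PATH.
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		fmt.Println(name)
	}
}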

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.77s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-208083 --network=bridge
E1107 23:49:50.136596 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:50:10.616857 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-208083 --network=bridge: (32.716889643s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-208083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-208083
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-208083: (2.018088919s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.77s)

                                                
                                    
x
+
TestKicExistingNetwork (36.37s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-113121 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-113121 --network=existing-network: (34.121798266s)
helpers_test.go:175: Cleaning up "existing-network-113121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-113121
E1107 23:50:51.577107 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-113121: (2.091089013s)
--- PASS: TestKicExistingNetwork (36.37s)

                                                
                                    
x
+
TestKicCustomSubnet (36.76s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-326978 --subnet=192.168.60.0/24
E1107 23:51:19.456225 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-326978 --subnet=192.168.60.0/24: (34.624922525s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-326978 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-326978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-326978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-326978: (2.113647901s)
--- PASS: TestKicCustomSubnet (36.76s)
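The assertion behind this test is that the subnet reported by `docker network inspect ... --format "{{(index .IPAM.Config 0).Subnet}}"` matches the value passed with --subnet. A minimal Go sketch of that kind of containment check, using the 192.168.60.0/24 value from the run above; the node address is a hypothetical example, not taken from this log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Subnet requested via --subnet in the run above.
	subnet := netip.MustParsePrefix("192.168.60.0/24")
	// Hypothetical node address, e.g. what `minikube ip` might report.
	nodeIP := netip.MustParseAddr("192.168.60.2")
	fmt.Println("inside requested subnet:", subnet.Contains(nodeIP))
}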

                                                
                                    
x
+
TestKicStaticIP (33.76s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-563781 --static-ip=192.168.200.200
E1107 23:51:47.140511 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-563781 --static-ip=192.168.200.200: (31.456636341s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-563781 ip
helpers_test.go:175: Cleaning up "static-ip-563781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-563781
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-563781: (2.13448132s)
--- PASS: TestKicStaticIP (33.76s)

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (73.94s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-150888 --driver=docker  --container-runtime=crio
E1107 23:52:13.498082 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-150888 --driver=docker  --container-runtime=crio: (31.947752216s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-153973 --driver=docker  --container-runtime=crio
E1107 23:52:56.102152 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-153973 --driver=docker  --container-runtime=crio: (36.560378456s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-150888
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-153973
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-153973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-153973
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-153973: (2.076247067s)
helpers_test.go:175: Cleaning up "first-150888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-150888
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-150888: (2.006011925s)
--- PASS: TestMinikubeProfile (73.94s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-624879 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-624879 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.436214727s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.44s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-624879 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.11s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-626667 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-626667 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.107570497s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-626667 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-624879 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-624879 --alsologtostderr -v=5: (1.694454481s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-626667 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-626667
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-626667: (1.238202674s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.09s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-626667
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-626667: (7.086382198s)
--- PASS: TestMountStart/serial/RestartStopped (8.09s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-626667 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (127.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-898977 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1107 23:54:19.146285 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1107 23:54:29.656068 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1107 23:54:57.338704 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-898977 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m6.972432758s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.53s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-898977 -- rollout status deployment/busybox: (3.530481634s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-f95qf -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-xprzg -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-f95qf -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-xprzg -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-f95qf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-898977 -- exec busybox-5bc68d56bd-xprzg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.85s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-898977 -v 3 --alsologtostderr
E1107 23:56:19.456484 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-898977 -v 3 --alsologtostderr: (49.967957093s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.70s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (11.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp testdata/cp-test.txt multinode-898977:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1155627405/001/cp-test_multinode-898977.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977:/home/docker/cp-test.txt multinode-898977-m02:/home/docker/cp-test_multinode-898977_multinode-898977-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m02 "sudo cat /home/docker/cp-test_multinode-898977_multinode-898977-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977:/home/docker/cp-test.txt multinode-898977-m03:/home/docker/cp-test_multinode-898977_multinode-898977-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m03 "sudo cat /home/docker/cp-test_multinode-898977_multinode-898977-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp testdata/cp-test.txt multinode-898977-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1155627405/001/cp-test_multinode-898977-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977-m02:/home/docker/cp-test.txt multinode-898977:/home/docker/cp-test_multinode-898977-m02_multinode-898977.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977 "sudo cat /home/docker/cp-test_multinode-898977-m02_multinode-898977.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977-m02:/home/docker/cp-test.txt multinode-898977-m03:/home/docker/cp-test_multinode-898977-m02_multinode-898977-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m03 "sudo cat /home/docker/cp-test_multinode-898977-m02_multinode-898977-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp testdata/cp-test.txt multinode-898977-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1155627405/001/cp-test_multinode-898977-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977-m03:/home/docker/cp-test.txt multinode-898977:/home/docker/cp-test_multinode-898977-m03_multinode-898977.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977 "sudo cat /home/docker/cp-test_multinode-898977-m03_multinode-898977.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 cp multinode-898977-m03:/home/docker/cp-test.txt multinode-898977-m02:/home/docker/cp-test_multinode-898977-m03_multinode-898977-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 ssh -n multinode-898977-m02 "sudo cat /home/docker/cp-test_multinode-898977-m03_multinode-898977-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.44s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-898977 node stop m03: (1.270865651s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-898977 status: exit status 7 (580.899552ms)

                                                
                                                
-- stdout --
	multinode-898977
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-898977-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-898977-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-898977 status --alsologtostderr: exit status 7 (571.974604ms)

                                                
                                                
-- stdout --
	multinode-898977
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-898977-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-898977-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 23:57:08.765646 1530180 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:57:08.765882 1530180 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:57:08.765912 1530180 out.go:309] Setting ErrFile to fd 2...
	I1107 23:57:08.765933 1530180 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:57:08.766257 1530180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1107 23:57:08.766472 1530180 out.go:303] Setting JSON to false
	I1107 23:57:08.766643 1530180 notify.go:220] Checking for updates...
	I1107 23:57:08.767851 1530180 mustload.go:65] Loading cluster: multinode-898977
	I1107 23:57:08.770149 1530180 config.go:182] Loaded profile config "multinode-898977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:57:08.770201 1530180 status.go:255] checking status of multinode-898977 ...
	I1107 23:57:08.770731 1530180 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Status}}
	I1107 23:57:08.790203 1530180 status.go:330] multinode-898977 host status = "Running" (err=<nil>)
	I1107 23:57:08.790241 1530180 host.go:66] Checking if "multinode-898977" exists ...
	I1107 23:57:08.790542 1530180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898977
	I1107 23:57:08.816501 1530180 host.go:66] Checking if "multinode-898977" exists ...
	I1107 23:57:08.816875 1530180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:57:08.816931 1530180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977
	I1107 23:57:08.839521 1530180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977/id_rsa Username:docker}
	I1107 23:57:08.928842 1530180 ssh_runner.go:195] Run: systemctl --version
	I1107 23:57:08.934650 1530180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:57:08.948636 1530180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:57:09.021027 1530180 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-07 23:57:09.009956041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:57:09.021658 1530180 kubeconfig.go:92] found "multinode-898977" server: "https://192.168.58.2:8443"
	I1107 23:57:09.021687 1530180 api_server.go:166] Checking apiserver status ...
	I1107 23:57:09.021734 1530180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:57:09.035438 1530180 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup
	I1107 23:57:09.047929 1530180 api_server.go:182] apiserver freezer: "13:freezer:/docker/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/crio/crio-4cf9bcc8241440b9b845e70b6ab0eef1a2e526aaf87c8491a20d76ea24d6baf1"
	I1107 23:57:09.047999 1530180 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8776ce48ea1a9219a3c66d557ff062aebdb329b3f5c03a3056eb2163e0705517/crio/crio-4cf9bcc8241440b9b845e70b6ab0eef1a2e526aaf87c8491a20d76ea24d6baf1/freezer.state
	I1107 23:57:09.059827 1530180 api_server.go:204] freezer state: "THAWED"
	I1107 23:57:09.059859 1530180 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1107 23:57:09.069271 1530180 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1107 23:57:09.069303 1530180 status.go:421] multinode-898977 apiserver status = Running (err=<nil>)
	I1107 23:57:09.069321 1530180 status.go:257] multinode-898977 status: &{Name:multinode-898977 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:57:09.069365 1530180 status.go:255] checking status of multinode-898977-m02 ...
	I1107 23:57:09.069764 1530180 cli_runner.go:164] Run: docker container inspect multinode-898977-m02 --format={{.State.Status}}
	I1107 23:57:09.098730 1530180 status.go:330] multinode-898977-m02 host status = "Running" (err=<nil>)
	I1107 23:57:09.098756 1530180 host.go:66] Checking if "multinode-898977-m02" exists ...
	I1107 23:57:09.099075 1530180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-898977-m02
	I1107 23:57:09.117238 1530180 host.go:66] Checking if "multinode-898977-m02" exists ...
	I1107 23:57:09.117550 1530180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:57:09.117591 1530180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-898977-m02
	I1107 23:57:09.137055 1530180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/17585-1449649/.minikube/machines/multinode-898977-m02/id_rsa Username:docker}
	I1107 23:57:09.228462 1530180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:57:09.242260 1530180 status.go:257] multinode-898977-m02 status: &{Name:multinode-898977-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:57:09.242295 1530180 status.go:255] checking status of multinode-898977-m03 ...
	I1107 23:57:09.242644 1530180 cli_runner.go:164] Run: docker container inspect multinode-898977-m03 --format={{.State.Status}}
	I1107 23:57:09.261045 1530180 status.go:330] multinode-898977-m03 host status = "Stopped" (err=<nil>)
	I1107 23:57:09.261069 1530180 status.go:343] host is not running, skipping remaining checks
	I1107 23:57:09.261083 1530180 status.go:257] multinode-898977-m03 status: &{Name:multinode-898977-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
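The --alsologtostderr trace above shows how `status` derives each field: host state from `docker container inspect`, kubelet from `systemctl is-active`, and the apiserver from a healthz probe that returned 200 with body "ok". A rough Go sketch of that last probe against the endpoint shown in the log; it skips TLS verification for brevity, which minikube's own check (using the cluster CA) does not do.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the status log above ("Checking apiserver healthz at ...").
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the run above saw 200 with body "ok"
}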

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (12.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-898977 node start m03 --alsologtostderr: (11.519459479s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.35s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (122.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-898977
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-898977
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-898977: (25.114884738s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-898977 --wait=true -v=8 --alsologtostderr
E1107 23:57:56.102688 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-898977 --wait=true -v=8 --alsologtostderr: (1m37.120965575s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-898977
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-898977 node delete m03: (4.452933584s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 stop
E1107 23:59:29.655910 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-898977 stop: (23.905197339s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-898977 status: exit status 7 (113.437328ms)

                                                
                                                
-- stdout --
	multinode-898977
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-898977-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-898977 status --alsologtostderr: exit status 7 (112.397625ms)

                                                
                                                
-- stdout --
	multinode-898977
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-898977-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 23:59:53.351972 1538253 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:59:53.352220 1538253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:59:53.352250 1538253 out.go:309] Setting ErrFile to fd 2...
	I1107 23:59:53.352271 1538253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:59:53.352586 1538253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1107 23:59:53.352812 1538253 out.go:303] Setting JSON to false
	I1107 23:59:53.352965 1538253 mustload.go:65] Loading cluster: multinode-898977
	I1107 23:59:53.352965 1538253 notify.go:220] Checking for updates...
	I1107 23:59:53.353458 1538253 config.go:182] Loaded profile config "multinode-898977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:59:53.353482 1538253 status.go:255] checking status of multinode-898977 ...
	I1107 23:59:53.354050 1538253 cli_runner.go:164] Run: docker container inspect multinode-898977 --format={{.State.Status}}
	I1107 23:59:53.373409 1538253 status.go:330] multinode-898977 host status = "Stopped" (err=<nil>)
	I1107 23:59:53.373447 1538253 status.go:343] host is not running, skipping remaining checks
	I1107 23:59:53.373456 1538253 status.go:257] multinode-898977 status: &{Name:multinode-898977 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:59:53.373486 1538253 status.go:255] checking status of multinode-898977-m02 ...
	I1107 23:59:53.373796 1538253 cli_runner.go:164] Run: docker container inspect multinode-898977-m02 --format={{.State.Status}}
	I1107 23:59:53.393605 1538253 status.go:330] multinode-898977-m02 host status = "Stopped" (err=<nil>)
	I1107 23:59:53.393628 1538253 status.go:343] host is not running, skipping remaining checks
	I1107 23:59:53.393636 1538253 status.go:257] multinode-898977-m02 status: &{Name:multinode-898977-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.13s)
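The non-zero exits above are expected: `minikube status` signals a stopped cluster through its exit code rather than through an error message. A small Go sketch (assuming it runs from the same workspace layout as this job, so the relative binary path resolves) showing how a caller reads that code; in the run above the fully stopped two-node cluster produced exit status 7.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the test above; binary path is the one used throughout this report.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-898977", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// With both nodes stopped, the run above exited with status 7.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}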

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (85.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-898977 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-898977 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m24.599925802s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-898977 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.35s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (39.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-898977
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-898977-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-898977-m02 --driver=docker  --container-runtime=crio: exit status 14 (107.629395ms)

                                                
                                                
-- stdout --
	* [multinode-898977-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-898977-m02' is duplicated with machine name 'multinode-898977-m02' in profile 'multinode-898977'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-898977-m03 --driver=docker  --container-runtime=crio
E1108 00:01:19.456586 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-898977-m03 --driver=docker  --container-runtime=crio: (36.989730807s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-898977
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-898977: exit status 80 (412.145188ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-898977
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-898977-m03 already exists in multinode-898977-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-898977-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-898977-m03: (2.154830031s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.74s)

                                                
                                    
x
+
TestPreload (180.05s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-087126 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1108 00:02:42.501680 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1108 00:02:56.101713 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-087126 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.102540902s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-087126 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-087126 image pull gcr.io/k8s-minikube/busybox: (2.759107987s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-087126
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-087126: (5.925267546s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-087126 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1108 00:04:29.656321 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-087126 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m17.57790014s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-087126 image list
helpers_test.go:175: Cleaning up "test-preload-087126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-087126
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-087126: (2.415472514s)
--- PASS: TestPreload (180.05s)

                                                
                                    
x
+
TestScheduledStopUnix (108.86s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-893771 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-893771 --memory=2048 --driver=docker  --container-runtime=crio: (32.3005166s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-893771 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-893771 -n scheduled-stop-893771
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-893771 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-893771 --cancel-scheduled
E1108 00:05:52.698948 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-893771 -n scheduled-stop-893771
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-893771
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-893771 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1108 00:06:19.457032 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-893771
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-893771: exit status 7 (89.773094ms)

                                                
                                                
-- stdout --
	scheduled-stop-893771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-893771 -n scheduled-stop-893771
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-893771 -n scheduled-stop-893771: exit status 7 (86.466579ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-893771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-893771
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-893771: (4.739860767s)
--- PASS: TestScheduledStopUnix (108.86s)
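(Sketch of the scheduled-stop flow exercised above, assuming a local minikube binary rather than out/minikube-linux-arm64; the profile name is the one used in this run.)
	minikube start -p scheduled-stop-893771 --memory=2048 --driver=docker --container-runtime=crio
	minikube stop -p scheduled-stop-893771 --schedule 5m         # arm a scheduled stop
	minikube stop -p scheduled-stop-893771 --cancel-scheduled    # cancel it before it fires
	minikube stop -p scheduled-stop-893771 --schedule 15s        # re-arm with a short timer and let it fire
	minikube status -p scheduled-stop-893771 --format={{.Host}}  # reports Stopped (exit status 7) once the timer fires
	minikube delete -p scheduled-stop-893771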

                                                
                                    
x
+
TestInsufficientStorage (14.25s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-708027 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-708027 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.650217972s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"351d0690-ae39-4df4-bf83-fc9aa9c75597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-708027] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0976450-62dd-4421-905d-f0364e858156","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"c9f2c73a-ceea-40ee-82ae-3e295e694b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c039cdda-a37b-49ae-bab0-65efd852308e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig"}}
	{"specversion":"1.0","id":"b692e2ab-0cef-4958-8e09-00ae144e57fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube"}}
	{"specversion":"1.0","id":"8d7aabec-df54-4dd1-a016-2a6f192ab809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1148b620-1bab-4eff-8ad6-d93a33f294cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"968dd14c-9259-42fc-b01f-a0f2ba075b64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"61e85c89-9304-42e1-a3f1-fdbfd38dbf24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"80b1127f-4b26-4518-a52d-535132b67b59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4549502-cd24-41ac-897b-008b83595fa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"bd35687b-86dc-462f-b5bf-9dbc1eb8bbfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-708027 in cluster insufficient-storage-708027","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3e52325-0488-4682-ad46-81ad3a666e08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f52b91d5-1bdc-42a4-88aa-917a4a211395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9528cb97-395b-43d2-9d98-994abdc28f3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-708027 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-708027 --output=json --layout=cluster: exit status 7 (325.560185ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-708027","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-708027","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:07:06.279982 1555083 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-708027" does not appear in /home/jenkins/minikube-integration/17585-1449649/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-708027 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-708027 --output=json --layout=cluster: exit status 7 (326.591053ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-708027","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-708027","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:07:06.607937 1555136 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-708027" does not appear in /home/jenkins/minikube-integration/17585-1449649/kubeconfig
	E1108 00:07:06.620232 1555136 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/insufficient-storage-708027/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-708027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-708027
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-708027: (1.947361293s)
--- PASS: TestInsufficientStorage (14.25s)
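(The run above trips the storage check via the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 values visible in the JSON output; a sketch of reproducing it, assuming those values are read from the environment like the other MINIKUBE_* settings and that a local minikube binary is used.)
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-708027 --memory=2048 --output=json --driver=docker --container-runtime=crio   # exits 26 (RSRC_DOCKER_STORAGE)
	minikube status -p insufficient-storage-708027 --output=json --layout=cluster   # StatusCode 507 / InsufficientStorage
	minikube delete -p insufficient-storage-708027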

                                                
                                    
x
+
TestKubernetesUpgrade (384.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.102855749s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-699575
E1108 00:09:29.656190 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-699575: (1.786680248s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-699575 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-699575 status --format={{.Host}}: exit status 7 (94.933411ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m42.518823724s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-699575 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (99.130144ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-699575] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-699575
	    minikube start -p kubernetes-upgrade-699575 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6995752 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-699575 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.361292219s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-699575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-699575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-699575: (2.74450355s)
--- PASS: TestKubernetesUpgrade (384.82s)
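(Sketch of the upgrade path exercised above, assuming a local minikube binary: bring up an old cluster, stop it, upgrade in place, confirm a downgrade is refused, then restart at the new version.)
	minikube start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-699575
	minikube start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.28.3 --driver=docker --container-runtime=crio
	minikube start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # refused: K8S_DOWNGRADE_UNSUPPORTED (exit 106)
	minikube start -p kubernetes-upgrade-699575 --memory=2200 --kubernetes-version=v1.28.3 --driver=docker --container-runtime=crio   # restart at the kept version
	minikube delete -p kubernetes-upgrade-699575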

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-743569 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-743569 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (94.876198ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-743569] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-743569 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-743569 --driver=docker  --container-runtime=crio: (40.691478985s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-743569 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-743569 --no-kubernetes --driver=docker  --container-runtime=crio
E1108 00:07:56.102421 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-743569 --no-kubernetes --driver=docker  --container-runtime=crio: (6.810093357s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-743569 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-743569 status -o json: exit status 2 (579.341328ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-743569","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-743569
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-743569: (2.207911701s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-743569 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-743569 --no-kubernetes --driver=docker  --container-runtime=crio: (10.111025605s)
--- PASS: TestNoKubernetes/serial/Start (10.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-743569 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-743569 "sudo systemctl is-active --quiet service kubelet": exit status 1 (484.721255ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-743569
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-743569: (1.299030311s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-743569 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-743569 --driver=docker  --container-runtime=crio: (8.00973362s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-743569 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-743569 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.711812ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)
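(Sketch of the no-Kubernetes flow covered by the NoKubernetes tests above, assuming a local minikube binary; the kubelet check mirrors the ssh command in the log.)
	minikube start -p NoKubernetes-743569 --no-kubernetes --driver=docker --container-runtime=crio
	minikube ssh -p NoKubernetes-743569 "sudo systemctl is-active --quiet service kubelet"   # non-zero exit: kubelet is not running
	minikube stop -p NoKubernetes-743569
	minikube delete -p NoKubernetes-743569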

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.62s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-312173
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                    
x
+
TestPause/serial/Start (80.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-018865 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-018865 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.645563564s)
--- PASS: TestPause/serial/Start (80.65s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (43.19s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-018865 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1108 00:14:29.655852 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-018865 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.1501234s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.19s)

                                                
                                    
x
+
TestPause/serial/Pause (1.1s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-018865 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-018865 --alsologtostderr -v=5: (1.097116787s)
--- PASS: TestPause/serial/Pause (1.10s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-018865 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-018865 --output=json --layout=cluster: exit status 2 (528.84777ms)

                                                
                                                
-- stdout --
	{"Name":"pause-018865","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-018865","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.53s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-018865 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.99s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-018865 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-018865 --alsologtostderr -v=5: (1.778289536s)
--- PASS: TestPause/serial/PauseAgain (1.78s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.08s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-018865 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-018865 --alsologtostderr -v=5: (3.081407465s)
--- PASS: TestPause/serial/DeletePaused (3.08s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-018865
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-018865: exit status 1 (33.069306ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-018865: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)
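(Sketch of the pause lifecycle exercised by the TestPause serial steps above, assuming a local minikube binary; the status codes match the JSON output in the log.)
	minikube start -p pause-018865 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=crio
	minikube pause -p pause-018865
	minikube status -p pause-018865 --output=json --layout=cluster   # exit status 2, StatusName Paused (418)
	minikube unpause -p pause-018865
	minikube pause -p pause-018865
	minikube delete -p pause-018865
	docker volume inspect pause-018865   # errors after delete: no such volume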

                                                
                                    
x
+
TestNetworkPlugins/group/false (6.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-184520 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-184520 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (327.815962ms)

                                                
                                                
-- stdout --
	* [false-184520] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:15:15.417092 1593716 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:15:15.421991 1593716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:15:15.422007 1593716 out.go:309] Setting ErrFile to fd 2...
	I1108 00:15:15.422015 1593716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:15:15.422357 1593716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-1449649/.minikube/bin
	I1108 00:15:15.422789 1593716 out.go:303] Setting JSON to false
	I1108 00:15:15.423931 1593716 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25065,"bootTime":1699377451,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1108 00:15:15.423997 1593716 start.go:138] virtualization:  
	I1108 00:15:15.427420 1593716 out.go:177] * [false-184520] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 00:15:15.429620 1593716 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:15:15.429680 1593716 notify.go:220] Checking for updates...
	I1108 00:15:15.433158 1593716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:15:15.435497 1593716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-1449649/kubeconfig
	I1108 00:15:15.437192 1593716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-1449649/.minikube
	I1108 00:15:15.439173 1593716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 00:15:15.441197 1593716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:15:15.443520 1593716 config.go:182] Loaded profile config "force-systemd-flag-185654": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:15:15.443628 1593716 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:15:15.479021 1593716 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 00:15:15.479152 1593716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 00:15:15.620908 1593716 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-08 00:15:15.60999968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 00:15:15.621012 1593716 docker.go:295] overlay module found
	I1108 00:15:15.622939 1593716 out.go:177] * Using the docker driver based on user configuration
	I1108 00:15:15.624397 1593716 start.go:298] selected driver: docker
	I1108 00:15:15.624415 1593716 start.go:902] validating driver "docker" against <nil>
	I1108 00:15:15.624435 1593716 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:15:15.626974 1593716 out.go:177] 
	W1108 00:15:15.628523 1593716 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1108 00:15:15.630088 1593716 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-184520 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-184520" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-184520

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: cri-dockerd version:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: containerd daemon status:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: containerd daemon config:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: /etc/containerd/config.toml:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: containerd config dump:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: crio daemon status:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: crio daemon config:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: /etc/crio:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

>>> host: crio config:
* Profile "false-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184520"

----------------------- debugLogs end: false-184520 [took: 5.710341181s] --------------------------------
helpers_test.go:175: Cleaning up "false-184520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-184520
--- PASS: TestNetworkPlugins/group/false (6.29s)

TestStartStop/group/old-k8s-version/serial/FirstStart (121.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-234412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1108 00:17:56.101704 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-234412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m1.667601784s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (121.67s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-234412 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da684df4-c56f-4c32-b9ee-e6bb288b7de4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da684df4-c56f-4c32-b9ee-e6bb288b7de4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.03216212s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-234412 exec busybox -- /bin/sh -c "ulimit -n"
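The DeployApp step above can be replayed by hand; the harness only waits for the busybox pod matching the label and then reads its open-file limit. A rough manual equivalent, assuming the same profile/context name (the 8m timeout mirrors the wait budget shown above):

	kubectl --context old-k8s-version-234412 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-234412 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-234412 exec busybox -- /bin/sh -c "ulimit -n"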
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-234412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-234412 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/old-k8s-version/serial/Stop (12.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-234412 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-234412 --alsologtostderr -v=3: (12.236865905s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.24s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-234412 -n old-k8s-version-234412
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-234412 -n old-k8s-version-234412: exit status 7 (188.95032ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-234412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
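This step tolerates the non-zero status: with the profile stopped, minikube status prints Stopped and exits with status 7, and the dashboard addon is then enabled against the stopped profile. A minimal by-hand sketch using the same commands, assuming the same profile name:

	out/minikube-linux-arm64 stop -p old-k8s-version-234412
	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-234412 -n old-k8s-version-234412 || true   # prints Stopped, exit status 7
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-234412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4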
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/old-k8s-version/serial/SecondStart (426.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-234412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1108 00:19:22.502476 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1108 00:19:29.655684 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-234412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m5.729544407s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-234412 -n old-k8s-version-234412
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (426.22s)

TestStartStop/group/no-preload/serial/FirstStart (67.76s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-181217 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-181217 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m7.756912986s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.76s)

TestStartStop/group/no-preload/serial/DeployApp (10.47s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-181217 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [115130b2-ab67-4582-afd9-464c19e4d672] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [115130b2-ab67-4582-afd9-464c19e4d672] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.033081013s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-181217 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.47s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-181217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-181217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.070915682s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-181217 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-181217 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-181217 --alsologtostderr -v=3: (12.118730183s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-181217 -n no-preload-181217
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-181217 -n no-preload-181217: exit status 7 (96.031235ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-181217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (632.14s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-181217 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:21:19.456511 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1108 00:22:32.699854 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1108 00:22:56.101920 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1108 00:24:29.656742 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1108 00:26:19.456326 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-181217 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (10m31.729490818s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-181217 -n no-preload-181217
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (632.14s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cvq96" [01224762-22bf-4d92-b638-5be907cbc3d6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.032247213s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cvq96" [01224762-22bf-4d92-b638-5be907cbc3d6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008717713s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-234412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-234412 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
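The image check above lists what crictl reports inside the node and flags anything outside the expected minikube image set. To inspect the same data manually, the JSON can be filtered with jq (assuming crictl's usual .images[].repoTags layout):

	out/minikube-linux-arm64 ssh -p old-k8s-version-234412 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'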
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/old-k8s-version/serial/Pause (4.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-234412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-234412 --alsologtostderr -v=1: (1.127334513s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-234412 -n old-k8s-version-234412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-234412 -n old-k8s-version-234412: exit status 2 (422.442114ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-234412 -n old-k8s-version-234412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-234412 -n old-k8s-version-234412: exit status 2 (467.894256ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-234412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-234412 --alsologtostderr -v=1: (1.060109823s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-234412 -n old-k8s-version-234412
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-234412 -n old-k8s-version-234412
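The pause/unpause round-trip above follows the expected pattern: after pause, the APIServer reports Paused and the kubelet reports Stopped (both status calls exit 2, which the test accepts), and after unpause the same status calls return without a non-zero exit. A condensed manual version, assuming the same profile:

	out/minikube-linux-arm64 pause -p old-k8s-version-234412
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-234412 || true   # Paused, exit status 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-234412 || true     # Stopped, exit status 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-234412
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-234412            # no non-zero exit reported above
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-234412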
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.45s)

TestStartStop/group/embed-certs/serial/FirstStart (81.85s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-824910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:27:39.147543 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1108 00:27:56.102310 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-824910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m21.846384635s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.85s)

TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-824910 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [296d7abf-96e3-4898-a52e-985e003655cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [296d7abf-96e3-4898-a52e-985e003655cd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0331654s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-824910 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-824910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-824910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11786509s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-824910 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (12.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-824910 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-824910 --alsologtostderr -v=3: (12.126635925s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-824910 -n embed-certs-824910
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-824910 -n embed-certs-824910: exit status 7 (103.730719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-824910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (638.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-824910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:28:56.705095 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:56.710312 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:56.720471 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:56.740712 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:56.780958 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:56.861212 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:57.021702 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:57.342290 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:57.982993 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:28:59.263573 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:29:01.824324 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:29:06.945486 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:29:17.185927 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:29:29.655687 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1108 00:29:37.666872 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:30:18.627940 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:31:19.456652 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1108 00:31:40.548186 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-824910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (10m37.818835809s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-824910 -n embed-certs-824910
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (638.22s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xxptb" [7f7c1536-fc4b-45ea-8016-1e09e4afc69f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.028523837s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xxptb" [7f7c1536-fc4b-45ea-8016-1e09e4afc69f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012496005s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-181217 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-181217 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/no-preload/serial/Pause (3.8s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-181217 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-181217 --alsologtostderr -v=1: (1.141722099s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-181217 -n no-preload-181217
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-181217 -n no-preload-181217: exit status 2 (377.714192ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-181217 -n no-preload-181217
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-181217 -n no-preload-181217: exit status 2 (371.733236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-181217 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-181217 -n no-preload-181217
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-181217 -n no-preload-181217
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.80s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-182018 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:32:56.102111 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-182018 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m20.439117423s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.44s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-182018 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [db93690b-9fdd-4e0b-9699-00b8f8fd7cc5] Pending
helpers_test.go:344: "busybox" [db93690b-9fdd-4e0b-9699-00b8f8fd7cc5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [db93690b-9fdd-4e0b-9699-00b8f8fd7cc5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.031655558s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-182018 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-182018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-182018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.426864949s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-182018 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.60s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-182018 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-182018 --alsologtostderr -v=3: (12.228069116s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018: exit status 7 (94.228945ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-182018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (608.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-182018 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:33:56.704330 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:34:24.389308 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:34:29.655726 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1108 00:35:53.392060 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:53.397464 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:53.407760 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:53.428084 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:53.468439 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:53.548803 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:53.709197 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:54.030301 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:54.670754 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:55.951587 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:35:58.512693 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:36:02.503125 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1108 00:36:03.632961 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:36:13.873523 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:36:19.456101 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1108 00:36:34.354051 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:37:15.314314 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:37:56.101713 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1108 00:38:37.235085 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
E1108 00:38:56.705123 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-182018 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (10m7.567756038s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (608.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-25mvr" [d4d5345e-49cc-4890-a501-e318ffd384c4] Running
E1108 00:39:12.700638 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.028672311s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.2s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-25mvr" [d4d5345e-49cc-4890-a501-e318ffd384c4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014258735s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-824910 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.20s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.6s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-824910 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.60s)

TestStartStop/group/embed-certs/serial/Pause (4.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-824910 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-824910 --alsologtostderr -v=1: (1.196372641s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-824910 -n embed-certs-824910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-824910 -n embed-certs-824910: exit status 2 (465.879737ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-824910 -n embed-certs-824910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-824910 -n embed-certs-824910: exit status 2 (514.294576ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-824910 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-824910 --alsologtostderr -v=1: (1.08979554s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-824910 -n embed-certs-824910
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-824910 -n embed-certs-824910
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.70s)

TestStartStop/group/newest-cni/serial/FirstStart (46.54s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-847479 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:39:29.657028 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-847479 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (46.539679284s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.54s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-847479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-847479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.334014272s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
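The warning above is expected for this group: the cluster is started with --network-plugin=cni but no CNI manifest is applied, so pods cannot schedule and the app-deployment steps are skipped. To use such a profile for real workloads, a CNI would have to be applied first, for example (placeholder manifest path, not part of this job):

	kubectl --context newest-cni-847479 apply -f <cni-manifest.yaml>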
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/newest-cni/serial/Stop (1.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-847479 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-847479 --alsologtostderr -v=3: (1.50058535s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.50s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-847479 -n newest-cni-847479
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-847479 -n newest-cni-847479: exit status 7 (92.750449ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-847479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (30.14s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-847479 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-847479 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (29.719494516s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-847479 -n newest-cni-847479
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-847479 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)
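Note: the image check shells into the node and dumps the CRI-O image list as JSON. A hedged sketch of inspecting it manually (assumes jq is available on the host):
    # list image tags known to CRI-O inside the newest-cni node
    out/minikube-linux-arm64 ssh -p newest-cni-847479 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'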

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-847479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-847479 -n newest-cni-847479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-847479 -n newest-cni-847479: exit status 2 (376.269183ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-847479 -n newest-cni-847479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-847479 -n newest-cni-847479: exit status 2 (364.795891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-847479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-847479 -n newest-cni-847479
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-847479 -n newest-cni-847479
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.33s)
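Note: the pause check is a pause/verify/unpause/verify loop; while paused the apiserver status reads "Paused" and the kubelet "Stopped", both with exit status 2 (which the test tolerates). A rough sketch of the same cycle:
    out/minikube-linux-arm64 pause -p newest-cni-847479
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p newest-cni-847479   # expect: Paused
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p newest-cni-847479     # expect: Stopped
    out/minikube-linux-arm64 unpause -p newest-cni-847479
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p newest-cni-847479   # expect: Running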

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (54.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1108 00:41:19.456546 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1108 00:41:21.076077 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/no-preload-181217/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (54.515040727s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-184520 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-184520 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-47g9w" [8c0d023d-5a6c-41aa-8ce9-14f3187619bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-47g9w" [8c0d023d-5a6c-41aa-8ce9-14f3187619bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.017106863s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-184520 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.24s)
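Note: the Localhost and HairPin checks both run nc from inside the netcat pod; the first dials localhost:8080, the second dials the pod's own Service name ("netcat"), which only succeeds when hairpin NAT works. A combined sketch against the auto profile:
    # localhost reachability from inside the pod
    kubectl --context auto-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
    # hairpin: pod -> its own Service VIP -> back to itself
    kubectl --context auto-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"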

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (66.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1108 00:42:56.101954 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.183737226s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.18s)
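Note: a trimmed-down version of the start invocation above, for spinning up a comparable flannel profile by hand (the profile name here is hypothetical):
    out/minikube-linux-arm64 start -p flannel-demo \
      --memory=3072 --cni=flannel \
      --driver=docker --container-runtime=crio \
      --wait=true --wait-timeout=15m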

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-d7rgh" [7fdb5da9-3148-43db-8342-6cd6fffe4b4a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.050265306s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.05s)
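Note: the controller check polls for a Ready flannel pod; an equivalent one-liner (label and namespace taken from the log above, 600s matching the 10m wait):
    kubectl --context flannel-184520 -n kube-flannel wait pod \
      -l app=flannel --for=condition=Ready --timeout=600s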

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-184520 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-184520 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hrsrj" [3c8062d0-1c43-4364-a765-bae21a7ffe08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hrsrj" [3c8062d0-1c43-4364-a765-bae21a7ffe08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.012251101s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.33s)
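Note: the NetCatPod step applies testdata/netcat-deployment.yaml and waits for the app=netcat pod to become Ready. A hedged equivalent with kubectl directly (the manifest path is the one the test uses, relative to the test working directory):
    kubectl --context flannel-184520 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context flannel-184520 wait pod -l app=netcat --for=condition=Ready --timeout=15m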

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-184520 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-65hwh" [fb87fd33-77b6-4b4a-96c2-ba9aa6d2c361] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.032431065s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-65hwh" [fb87fd33-77b6-4b4a-96c2-ba9aa6d2c361] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01163305s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-182018 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)
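Note: both dashboard checks above wait on the same k8s-app=kubernetes-dashboard pod and then describe the metrics scraper; a manual sketch (540s matching the 9m wait):
    kubectl --context default-k8s-diff-port-182018 -n kubernetes-dashboard \
      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s
    kubectl --context default-k8s-diff-port-182018 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper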

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-182018 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.52s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-182018 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-182018 --alsologtostderr -v=1: (1.158497196s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018: exit status 2 (519.083109ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018: exit status 2 (476.870278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-182018 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-182018 --alsologtostderr -v=1: (1.052709982s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-182018 -n default-k8s-diff-port-182018
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.72s)
E1108 00:48:37.947298 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/default-k8s-diff-port-182018/client.crt: no such file or directory
E1108 00:48:38.834591 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/flannel-184520/client.crt: no such file or directory
E1108 00:48:48.187542 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/default-k8s-diff-port-182018/client.crt: no such file or directory
E1108 00:48:49.075337 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/flannel-184520/client.crt: no such file or directory
E1108 00:48:56.704974 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
E1108 00:49:08.668214 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/default-k8s-diff-port-182018/client.crt: no such file or directory
E1108 00:49:09.555901 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/flannel-184520/client.crt: no such file or directory
E1108 00:49:29.655830 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1108 00:49:33.235097 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:49:49.628383 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/default-k8s-diff-port-182018/client.crt: no such file or directory
E1108 00:49:50.516150 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/flannel-184520/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (78.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m18.625137412s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (74.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1108 00:44:19.148633 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
E1108 00:44:29.655759 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/ingress-addon-legacy-878254/client.crt: no such file or directory
E1108 00:45:19.749881 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/old-k8s-version-234412/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m14.491399267s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.49s)
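Note: unlike the built-in --cni=flannel run earlier, this profile passes a CNI manifest by path, so minikube applies the given YAML instead of one of its bundled CNIs. A minimal sketch (hypothetical profile name; manifest path as used by the test):
    out/minikube-linux-arm64 start -p custom-flannel-demo \
      --memory=3072 --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=crio \
      --wait=true --wait-timeout=15m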

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2ljl9" [8caacb33-fab0-4ad4-b1db-13a194211b41] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.04443452s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-184520 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-184520 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xmjmr" [93dbf880-569f-4096-ae13-7e1791b824b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xmjmr" [93dbf880-569f-4096-ae13-7e1791b824b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.029544866s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-184520 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-184520 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kqtfq" [dee45f06-65b7-47c3-a20d-dced6485e8f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kqtfq" [dee45f06-65b7-47c3-a20d-dced6485e8f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.013545055s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-184520 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.37s)
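Note: the DNS check simply resolves kubernetes.default from inside the netcat pod; if it ever fails, the usual next step is to look at CoreDNS. A short sketch (k8s-app=kube-dns is the standard CoreDNS label, assumed here):
    kubectl --context custom-flannel-184520 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context custom-flannel-184520 -n kube-system get pods -l k8s-app=kube-dns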

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-184520 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (91.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m31.072189638s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (97.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1108 00:46:19.457011 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/functional-421985/client.crt: no such file or directory
E1108 00:46:49.393075 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:49.398284 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:49.408518 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:49.428778 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:49.469702 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:49.550047 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:49.711082 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:50.031621 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:50.672467 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:51.952677 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:54.513221 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:46:59.633849 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:47:09.874306 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
E1108 00:47:30.354505 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/auto-184520/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m37.703033398s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wc8g7" [950c93e5-4f75-49c2-9374-3287aa1f3da8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.033265463s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-184520 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-184520 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-26cmt" [a8d3d6bb-50fd-4c92-b0e8-499a5e8be90d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-26cmt" [a8d3d6bb-50fd-4c92-b0e8-499a5e8be90d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.009686503s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-184520 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-184520 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ftdq4" [3cb2a8fb-0241-4b4e-beb9-f4cdc3f30f43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 00:47:56.102116 1455019 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/addons-862145/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ftdq4" [3cb2a8fb-0241-4b4e-beb9-f4cdc3f30f43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.013967995s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-184520 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-184520 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (83.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-184520 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m23.474728321s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.47s)
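Note: here no named CNI plugin is selected; --enable-default-cni=true asks minikube for its default bridge-style CNI configuration (roughly what --cni=bridge selects in newer minikube releases). A trimmed sketch with a hypothetical profile name:
    out/minikube-linux-arm64 start -p default-cni-demo \
      --memory=3072 --enable-default-cni=true \
      --driver=docker --container-runtime=crio \
      --wait=true --wait-timeout=15m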

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-184520 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-184520 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x84wk" [1bd0cc93-96bb-4ae8-ab4d-77548b5ea021] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x84wk" [1bd0cc93-96bb-4ae8-ab4d-77548b5ea021] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.011643189s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-184520 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-184520 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    

Test skip (29/308)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.7s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-028350 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-028350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-028350
--- SKIP: TestDownloadOnlyKic (0.70s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-677129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-677129
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (6.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-184520 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-184520" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-184520

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184520"

                                                
                                                
----------------------- debugLogs end: kubenet-184520 [took: 5.846286075s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-184520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-184520
--- SKIP: TestNetworkPlugins/group/kubenet (6.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-184520 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-184520" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17585-1449649/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 08 Nov 2023 00:15:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: force-systemd-flag-185654
contexts:
- context:
    cluster: force-systemd-flag-185654
    extensions:
    - extension:
        last-update: Wed, 08 Nov 2023 00:15:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-flag-185654
  name: force-systemd-flag-185654
current-context: force-systemd-flag-185654
kind: Config
preferences: {}
users:
- name: force-systemd-flag-185654
  user:
    client-certificate: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/force-systemd-flag-185654/client.crt
    client-key: /home/jenkins/minikube-integration/17585-1449649/.minikube/profiles/force-systemd-flag-185654/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-184520

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-184520" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184520"

                                                
                                                
----------------------- debugLogs end: cilium-184520 [took: 6.341227547s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-184520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-184520
--- SKIP: TestNetworkPlugins/group/cilium (6.57s)

                                                
                                    